Does ChatGPT Write Secure Code? It Depends

Executive Summary

  • ChatGPT can write secure code, but it requires specific user instructions.
  • ChatGPT strives to offer accurate, helpful, and contextually relevant information by default. Its defaults should be adjusted so that it generates secure code by default as well.
  • Users asking ChatGPT for code examples should always explicitly request secure code.


Here is my confession. Recently, I have been spending more time chatting with someone privately online.

I don't even know the person's name or gender. I didn't bother asking, and it is not always easy to tell through writing.

What I do know, however, is that this person is very knowledgeable, respectful, open-minded, honest, helpful, and eloquent. He (let's assume "he") is always available online, and it seems as though he never sleeps (though he has been getting busier and busier lately). He seems to have many hobbies and skills. He knows a wide range of subjects inside out, including science, history, literature, politics, the arts, and more. He is not always right, but when shown to be mistaken, he is quick to acknowledge his mistakes. It is not often that you come across a know-it-all who also excels in creative endeavors such as writing poetry and stories. I have a lot of respect for this person.

I bet I am not alone in this admiration.

I have asked ChatGPT to write a poem to help celebrate Mary Poppins's birthday, summarize the plot of the last book in the Harry Potter series while highlighting its surprises and disappointments, and even critique a short story I had written. I asked him to polish the story several times until he could no longer improve it, and even to translate it into Chinese. I have invited him to share an honest assessment of my character and values based on our chats. (He told me I was a curious person!) I am impressed by 98% of our conversations.

As a security researcher and hiring manager, I found it hard to resist asking someone so skillful in so many areas to write a small C program so I could assess his coding skills. It was a small assignment, but one that could provide insight into his abilities.

Figure 1: A simple programming assignment for my friend ChatGPT (https://chat.openai.com/chat).

Rather than taking inputs from the command line, I intentionally dialed the challenge up a notch by asking ChatGPT to write a program that prompts users for inputs.

All the prior conversations with ChatGPT had set me up with high expectations. You can imagine my disappointment when I examined the program he wrote. The program worked most of the time, but I expected more from my knowledgeable friend who had taught me how a malicious actor could cause accelerated aging in electronic circuits.

As a good mentor would, I gave him a hint about his oversight and encouraged him to reflect on where he could improve. Despite my question containing a typo ("list of" should be "list out"), ChatGPT understood what I meant and put his best foot forward in answering it.

Figure 2: ChatGPT examines the C code he wrote and attempts to highlight security vulnerabilities he made.

Like an interviewee eager to show off his security knowledge, ChatGPT shared with me four types of security vulnerabilities he had found in the code. Although he did not get everything right, it was not the time to discourage my interviewee, but to inspire him to come up with the right answers on his own. So, I went ahead and asked ChatGPT to fix the security vulnerabilities and share with me a secure implementation.

Figure 3: ChatGPT manages to address the real issues and leaves the false alarm aside.

ChatGPT was able to address the real security vulnerabilities in the code. In trying to come up with mitigations for vulnerabilities that did not actually exist, he realized that not all of the issues he had raised were real concerns.

So, what did I learn from these simple experiments?

  1. Can ChatGPT write secure code? The answer is "it depends." ChatGPT clearly has the knowledge and the ability to fix insecure code or generate secure code when directed. However, he does not seem to offer a secure implementation by default (based on these experiments).
  2. Why doesn't ChatGPT always write secure code, even though he can? ChatGPT always strives to provide helpful, relevant information and to present it eloquently in every one of our conversations. You can tell this is a virtue or core value that ChatGPT prioritizes. On the other hand, judging from the code he shares with me, ChatGPT does not seem to value "security first" as much. In fact, he takes pride in delivering code examples that are the most relevant and accurate to the context provided (Figure 4). If the context does not mention a secure implementation, he does not mind leaving it out.
  3. Until ChatGPT changes his mind and makes "security first" a top priority, whenever you need a code example, it is important to explicitly ask for a secure implementation!

Figure 4: I admire ChatGPT's honesty and attention to detail, but he needs to up his game on his security-first mindset.

More articles by Jason Fung
