Zero-Days and Capture-the-Flag (CTF)
Matthew Carpenter
Cybersecurity Innovator @ GRIMM Cyber | Vulnerability Research | Reverse-Engineering | Critical Infrastructure | Cyber-Physical
I recently read a CSO Online article with a title that likely sparks the interest of many cybersecurity leaders: "Are capture-the-flag participants obligated to report zero days?"
My first reaction was, "Excuse me, what?"
I'm happy to see Capture the Flag events (CTFs) on the radar of C-level executives, as I believe they can add a great deal of value to a participant and their organization. The learning benefits of participating in CTFs can be exceptional for the right people, in the right circumstances.
However, I think this article from CSO is leaving the waters too murky. Let me attempt to add a little more context.
Capture-the-Flag events range from the simple to the "Uber" level of difficulty. They also vary in focus. For example, some CTFs build exploitable environments out of network misconfigurations and out-of-date or unpatched software; others focus on finding and exploiting vulnerabilities in a web application; still others require binary reverse-engineering and discovering vulnerabilities in compiled programs. There are other types of CTFs as well, but they share one commonality:
In a nutshell, during the course of nearly all CTF events, the discovery of important vulnerabilities is exceedingly rare.
For some of these CTFs (e.g., the DEF CON CTF, currently run by the Nautilus Institute), the very nature of the bugs means the exploits are technically "zero-days," but only for custom-built software used exclusively in the game. A game built around finding previously undiscovered security flaws in ordinary production software would be quite noteworthy (and, quite frequently, shunned). The closest thing that might qualify is Pwn2Own, held at CanSecWest each year. However, the very purpose of Pwn2Own is to benefit the public and compensate bug hunters (like a very ad hoc bug bounty).
And this brings us to the next point.
I remember, back in 2009 (before my own discovery of reversing and vulnerability research), being outraged at the "No More Free Bugs" initiative pushed by Dino Dai Zovi, Charlie Miller, and Alex Sotirov. To my way of thinking, solving security flaws was everyone's job! If you knew of a flaw and didn't report it to the vendor, you were part of the problem. Yeah, I didn't get it. And then....
Then I discovered vulnerability research and bug-hunting. I have spent the better part of two decades building a career around it. It is not easy.
Some bugs are easy to find. Some bugs you trip upon on a random Tuesday morning because you tied your shoe-laces the other direction. But that's not even close to the full story.
Make no mistake: bug-hunting is grueling, hard work, and it takes a special set of skills and a special mindset to do well. You fail a lot between successes. You're constantly bending your mind around what it takes to test a system, what aberrant behavior of your target means, and how to prove a bug is a bug. Some bugs require 186 different things to occur before the bug presents itself, and while investigating, you often simply don't know what those things are. And if the target software is not open-source, you most often must answer all these questions using only the bits a compiler produced (a.k.a. machine code and packaging).
I spend a lot of time thinking through how code works, and how to improve existing tools to highlight areas of interest while marking other areas as low value. Much of the cost of finding vulnerabilities is the veritable sea of code we may need to understand to determine if a particular piece of code is vulnerable.
To put it bluntly, bugs are currency. And they *should* be.
Vulnerability researchers of any value are rare enough, and their brains are valuable. Oh, and by the way, they didn't put the bugs there; software developers did. Bugs are the responsibility of the software vendors, and yet they become the users' problem.
As a career researcher, I'll be the first to tell you that some vendors understand this and go to great lengths to find flaws in their designs, their processes, and their finished code and devices. That is how it should be. I would love to sing their praises here, but I'm not in the habit of naming customers without their permission, and it might seem self-serving.
Other uses...
While we would all like to avoid being exploited through a vulnerability, these vulnerabilities are highly sought after by many entities, for many different reasons.
I won't go into whether sending the bug to a vendor is the best option, but there may be other uses for that information. Not everyone is "pro-fixing." To quote my friend Travis Goodspeed, "Some people want to help the bugs live long and fruitful lives!" Cyber-warfare and other iterations of "offensive cyber" are largely based on these undiscovered and unpatched vulnerabilities (see, for example, U.S. Cyber Command).
In conclusion
I hope you will take away a better understanding of the cyber-world around us. Understanding the motivations, realities, fears, and constraints surrounding vulnerabilities and exploits is key to effecting change. Discovered vulnerabilities are, and should be, valuable. They should be treated as such.
We cannot rely on altruism in the software security space. Instead, we must learn about and accept the economy of vulnerabilities, and make wise decisions accordingly.
Matt
(For more information on bug bounties, which are a wonderful way for bugs to get squished while rewarding the researchers who hunt them, look at resources from companies that help organize and run them for other organizations, such as HackerOne or Bugcrowd. You may also find value in the programs run by individual companies like Meta, Microsoft, Google, or even OpenAI.)