Disclosing Proof-of-Concept (PoC) exploits for vulnerabilities: A defender's point of view

Disclosure of vulnerabilities

I generally don’t pitch in on the “responsible PoC disclosure” debate, as I don’t actively contribute to the area and would rather let others like https://www.twitter.com/maddiestone, https://www.twitter.com/taviso, https://www.twitter.com/AndreaBarisani, or https://www.twitter.com/Fox0x01 argue (while I enjoy learning from their arguments). This blog post originated from a Twitter thread, and I was asked to create a more permanent link, so here goes a slightly extended version…

Context

Responsible/coordinated/timed disclosure is continuously a topic of heated debate, even more so when PoC (Proof of Concept) exploit code is included in the release. I no longer want to argue about whether releasing details of a discovered vulnerability is good or bad – that discussion is over, as far as I am concerned, and the majority consensus seems to align around coordinated publication of vulnerability details, with a reasonable time window given to all involved parties to patch the bug (or, when that is not possible, as in the case of e.g. hardware bugs, to create mitigations that make exploitation harder). What a reasonable time window is, is itself up for debate, but for software bugs a 90-day window seems to have been established as a workable compromise for most cases. I have in the past argued strongly against keeping bug details quiet, because that mostly helps advanced attackers, and I am not rehashing those arguments here, especially not in light of the current frontdoor/backdoor/nobody-but-us state trojan debate.

However, there are valid arguments for and against going beyond describing the details of a bug and including with the disclosure a fully working PoC exploit that demonstrates how the bug can be used to gain e.g. elevated privileges or code execution on a system where the bug is unpatched. This blog post is about exactly that argument.

Further context: I very rarely have a binary opinion on security topics, because - erm - most of the time it’s just hard and complicated and a nuanced discussion is better than sticking to one position and claiming it is the only answer. There are no silver bullets, especially not in information/computer security. Don’t trust anybody who claims to solve all your problems with this new thingie (Blockchain, I am looking at you!).

I am also writing from the defender’s side, i.e. as someone building sandboxing, mitigation, and attack surface reduction measures at both the architectural (system) and in-depth (code) level.

PoCs are (almost always) good

In this case, I am arguing for the release of PoC code, even though that may not be the best approach in every situation – there are always exceptional cases that require special handling. But in the vast majority of cases, I want to see a PoC. There’s even a book about it (which, to my shame, I haven’t yet read).

From my defender’s point of view, I have 2 main arguments why releasing full PoCs (after notifying the author/vendor and giving a reasonable time frame for everybody to patch) is helpful:

  1. A full PoC is an optimal test case for anybody to check whether that particular bug is still open (a minimal sketch of such a regression check follows after this list). Yes, the PoC may not work even though the bug is still there (mitigations, random system changes, etc.). But a crasher (a PoC that only shows e.g. a memory safety issue by crashing the target process with input that triggers the bug) is much harder to test with than a deterministic, full PoC (which has been carefully developed not only to work around random elements in the target system but also to do something “useful” like execute a log statement in the context of the target process). I simply don’t have the skills right now to develop such PoCs myself when given the abstract - even if detailed - bug descriptions, and many developers on the “builder” side (i.e. defenders building and hardening systems, or creating new mitigations on the compiler or runtime level) don’t either. Very different skillsets are involved, and that’s why we need people on both sides working together to learn from each other. Getting a free unit test is golden, even though - as with all testing - it cannot give complete coverage. That is, a full PoC can only demonstrate that the bug is still there; the fact that it no longer executes correctly cannot show that the bug has been fixed. Still, it is excellent for regression testing on the defender’s side and for checking whether the patch has been applied for all users of that particular system.
  2. As pointed out, a PoC - especially when coming with an in-depth write-up of the techniques - is highly educational in the sense of practical exploitation techniques. Generalizing from a number of them, new mitigations may appear that make the next PoCs harder, or we shine a light on attack surfaces that seem particularly brittle and are thus in need of sandboxing or other attack surface reduction. We (hello https://www.twitter.com/i41nbeer, https://www.twitter.com/tehjh, https://www.twitter.com/againsthimself, https://www.twitter.com/kayseesee, https://www.twitter.com/5aelo, and https://www.twitter.com/halvarflake, among many others whose opinions I highly value) may fiercely disagree on the relative value of particular mitigations or of sandboxing layered on top of them (I think of it as layers of defense), but without detailed PoCs, we wouldn’t even have the debate.
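
To illustrate the first argument, here is a minimal sketch of how a released PoC could be wired into a regression test. The PoC name, its command-line arguments, and the exit-code convention are all hypothetical assumptions for illustration; a real PoC defines its own interface.

#!/usr/bin/env python3
"""Minimal sketch: wiring a released PoC into a regression test.

The PoC binary name, its arguments, and the exit-code convention are
hypothetical assumptions for illustration only.
"""
import subprocess
import sys

# Assumed convention: the PoC exits 0 only if exploitation succeeded,
# e.g. its benign payload ran (such as writing a log statement in the
# context of the target process).
POC_CMD = ["./poc_for_the_bug", "--target", "localhost"]  # placeholder

def poc_still_works(timeout_s: int = 60) -> bool:
    """Run the PoC and report whether it demonstrated the bug.

    Note the asymmetry: a successful run proves the bug is still there,
    but a failed run does NOT prove the bug is fixed (mitigations or
    random environment changes can break the PoC while the bug remains).
    """
    try:
        result = subprocess.run(POC_CMD, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return False  # inconclusive: "not reproduced", not "fixed"
    return result.returncode == 0

if __name__ == "__main__":
    if poc_still_works():
        print("FAIL: PoC still works - patch missing or regressed")
        sys.exit(1)
    print("PASS: PoC did not reproduce the bug (not a proof of absence)")

Dropped into a CI job, a check like this turns the released PoC into the free unit test described above, while making its limitation explicit: a passing run is not proof that the bug is gone.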

You could add a third argument that the availability of a PoC can help prioritize fixing a particular vulnerability and allocate the necessary resources among all the involved parties, but I find the other 2 arguments much more important in the long term.

So from a defender’s point of view, full PoC releases sometimes make life more hectic (cough, 7 days as one of the exceptions to the 90-day default ;-) ), but I am highly grateful to the teams releasing them and thereby allowing us to improve the code (and yes, sometimes a short disclosure window is the right call).