Monday, October 24, 2016

CyCon debate on the VEP

Saturday I participated in a panel at CyConUS 2016 with Ari Schwartz and Steven Bellovin on the merits, or lack thereof, of the Vulnerability Equities Process (VEP).
That is not Ari's happy face.
Ari is, as you may remember from earlier posts on this blog, a huge fan of the VEP, and has many suggestions for codifying it in law, or at least in an Executive Order. My goal was to probe a bit and find the root cause of why he thinks these are good ideas. The answer is simple: Ethics.

There's always a surprisingly large contingent of otherwise well-informed people who think that 0days are an ETHICAL issue, as opposed to a technology and policy issue. Much as we used to think that gentlemen don't read each other's mail, many policy makers continue to believe Microsoft's argument that holding 0days is, by its very nature, a dirty business the government should have no part of, no matter what damage that position causes to our national interests.

This is partially due to Microsoft and other vendors pushing this argument for purely selfish reasons (their ethical arguments always coincide nicely with their business interests), but it's also an argument built out of a kind of weird jealousy and fear of the unknown. If you haven't worked with 0days, they are at once sexy and scary, the way dominatrices are before you realize they're just middle-aged women with a shitty job.

During the panel Ari insisted over and over that I didn't know wtf I was talking about, and that I wasn't there. He got heated as he pondered who the hell this annoying brown dude in an ill-fitting suit was to talk about these things! But nothing can erase the clear truth that none of the important studies on how vulnerabilities work were done before the VEP started. The policy was developed on gut instinct, without any recognition of how the problem works in real life.

The only sane VEP policy is not to "lean towards disclosure" but to release vulnerabilities only when they pose a clear and present danger and when releasing them would solve the problem rather than add to it.

As I pointed out on the panel, the VEP is at its heart a transparency and confidence-building measure, in the parlance of international relations people. But imagine if you went to your wife and told her a really hot girl at work hit on you, and that you told her no. Would that help your relationship? In other words, not all transparency and confidence-building measures are good ideas. The VEP is exactly like that example: are Microsoft and Google best friends with the USG now, or are they simply wishing we'd handle the crypto issue and National Security Letters like adults?

I spent most of my time on the panel talking about the massive operational security issues in the VEP, both short-term and very long-term. None of those issues have really been examined in public as far as I can tell, and high-level policy people like Ari have mostly ignored them. To make it simple: they are dire. Implementing the VEP the way we have has concretely damaged our most critical efforts in the cyber domain.

But worse than that is the very idea of implementing any major policy around vulnerabilities without understanding, at a deep technical level, how they work. If you ask Ari just this one question, and he cannot answer it, you know we have failed: what percentage of the vulnerabilities we sent through the VEP were ones that our adversaries also had and were using? For bonus points, ask him to name a few bug classes and see what he says.

The problem with the VEP is really this: we didn't start by asking the right questions. We started with an ethical judgment we got hypnotized into by the marketing teams of Silicon Valley, and hoped for the best.


1 comment:

  1. As a defender, I do agree with Ari that patching is important. However, I disagree that it is the most important thing that I can do.

    Significantly more powerful than patching are tools (like portions of EMET and SELinux) that go after entire attack vectors, not just one particular issue. Those tools allow me to kill entire classes of vulnerabilities (yes, they have their issues, yes, they can be bypassed, but they still increase cost for the attacker and that is a Good Thing). See the sketch at the end of this comment for the flavor of what I mean.

    So, rather than the VEP, a more useful disclosure process would be if the exploit developers let the defenders know, in general terms, what attack surface they were working on, so the defenders could develop similar tools that would kill the entire class. If you're paranoid about national security and your ability to keep attacking outside the US, then only pass that information over to .gov/.mil.

    Another problem with the VEP (and disclosure in general) is that many companies have their bugtrackers completely public. I've seen many researchers hit paydirt just by digging through that information.
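
    To make the class-killing point concrete, here is a minimal sketch. It uses Linux seccomp-bpf rather than EMET or SELinux (my own choice of illustration, not anything those tools actually ship), and it simply removes the "spawn a new program" capability from a process, so it no longer matters which individual memory-corruption bug an attacker came in through:

        /* Minimal sketch: deny the execve/execveat syscalls for this process
         * with a seccomp-bpf filter. The point is killing a capability class,
         * not patching one CVE. A production filter would also check
         * seccomp_data.arch; that is omitted here to keep the sketch short. */
        #include <stddef.h>
        #include <stdio.h>
        #include <sys/prctl.h>
        #include <sys/syscall.h>
        #include <linux/filter.h>
        #include <linux/seccomp.h>

        static int deny_exec_class(void)
        {
            struct sock_filter filter[] = {
                /* Load the syscall number from struct seccomp_data. */
                BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
                /* If it is execve or execveat, jump to the KILL return. */
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_execve,   2, 0),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_execveat, 1, 0),
                /* Everything else is allowed. */
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
            };
            struct sock_fprog prog = {
                .len    = (unsigned short)(sizeof(filter) / sizeof(filter[0])),
                .filter = filter,
            };

            /* Required so an unprivileged process may install a filter. */
            if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                return -1;
            return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
        }

        int main(void)
        {
            if (deny_exec_class()) {
                perror("seccomp");
                return 1;
            }
            puts("execve is now off the table for this process, bug or no bug");
            /* Any exploit that tries to pop a shell from here gets killed,
             * regardless of which vulnerability it used to get control. */
            return 0;
        }

    The filter doesn't care which CVE the attacker used; it removes the payoff for a whole family of them at once, and that is the kind of thing I'd rather get out of a disclosure process than one patched bug at a time.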
