Thursday, April 20, 2017

Making a NEW and NextGen AntiVirus Company out of DHS instead of an old and busted one


So I have yet another exciting policy proposal, based on how the USG can't trust any software vendor's remediation process to be beyond the control of the FSB. :)

You can see in the DHS a tiny shadow of an anti-virus company: EINSTEIN, threat intelligence, incident response, and managed penetration testing - the whole works. But we're kinda doing it without realizing what we're building. So why not deliberately develop a real next-gen infosec company instead?

In fact, secret USG information would work best if we could use it ALL AT ONCE. Instead of publishing reports and giving the Russians time to upgrade all their trojans as various companies react at different times, we could FLASH UNINSTALL every variant of a single Russian trojan, as if we were FireEye, on any company that opts in to our system.
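To make the plumbing concrete, here's a napkin-level Python sketch, where every name and field is my own illustrative assumption rather than any real DHS design: a central authority signs a remediation directive listing known-bad file hashes, and every enrolled endpoint verifies that signature before removing matching files, all at the same moment.

    import hashlib, hmac, json, os

    SHARED_KEY = b"per-enrollee key provisioned at opt-in"  # illustrative only

    def sign_directive(bad_sha256_hashes):
        # canonicalize the directive so the signature is stable
        body = json.dumps({"action": "uninstall", "sha256": sorted(bad_sha256_hashes)})
        tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
        return {"body": body, "tag": tag}

    def apply_directive(directive, scan_root):
        # endpoints refuse anything that is not authentically signed
        expected = hmac.new(SHARED_KEY, directive["body"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, directive["tag"]):
            return
        bad = set(json.loads(directive["body"])["sha256"])
        for dirpath, _, filenames in os.walk(scan_root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    if hashlib.sha256(f.read()).hexdigest() in bad:
                        os.remove(path)  # the "flash uninstall": every enrollee drops the variant at once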

Also, why should we rely on Microsoft's patches when we can, as soon as we need to, make our own USG-developed patches with something like 0patch.com? Not doing this seems like being horribly unprepared for real-world events like leaks, no?
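For flavor, here's a toy sketch of the verify-then-splice idea behind micropatching. Note that 0patch actually does this in memory at runtime; the on-disk version below, with made-up offsets and bytes, is only to show the concept:

    def micropatch(binary_path, offset, expected_bytes, patched_bytes):
        # same-size patches only, so we never shift the rest of the binary
        assert len(expected_bytes) == len(patched_bytes)
        with open(binary_path, "r+b") as f:
            f.seek(offset)
            if f.read(len(expected_bytes)) != expected_bytes:
                raise ValueError("build mismatch: refusing to patch an unknown version")
            f.seek(offset)
            f.write(patched_bytes)

    # Hypothetical usage: NOP out a vulnerable conditional jump (0x90 is the x86 NOP)
    # micropatch("target.dll", 0x1a2b, b"\x0f\x84\xde\xad\xbe\xef", b"\x90" * 6)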

Why can't I sign my company up for the DHS "behavioral analysis" AI endpoint protection, with a neural network trained not just on open-source malware but on the latest captured Russian trojans?
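To make that concrete, here is a minimal sketch of the kind of behavioral classifier I mean, using scikit-learn. The features and training rows are synthetic stand-ins I made up; a real system would train on labeled telemetry, including those captured trojan samples.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # toy per-process features: [files written/min, outbound hosts, registry writes, child procs]
    X = np.array([
        [2, 1, 0, 1],    # benign-ish
        [1, 2, 1, 0],    # benign-ish
        [400, 1, 0, 0],  # ransomware-ish: mass file writes
        [3, 40, 12, 6],  # trojan-ish: beaconing, persistence, spawning
    ])
    y = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    print(clf.predict([[350, 2, 1, 1]]))  # flags the mass file-writer as malicious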

Think Next Gen, people! :)

Alternative Theories

Fact 1: ShadowBrokers release was either "Old-Day" or "Patched"
Fact 2: Microsoft PR claims no individual or organization told them (found them all internally, eh?)

And of course, Fact 3: the US-CERT response to the ShadowBrokers' earlier announcements.

So there are a lot of possibilities here that remain unexplored. I know the common thought (say, on Risky.biz) is that the Vulnerability Equities Process jumped into action and helped MS with these bugs, and then the patches came out JUST IN TIME.

Question: Why would the US not publicize, as Susan Hennessey has suggested, this effort from the VEP?

Fact 4: The SB release was on Friday, three short days after MS Patch Tuesday.

One possibility is that the SB team tested all their bugs in a trivial way by running them against the patched targets, then released when nothing worked anymore. But no pro team works this way, because a lot of the time "patches" break exploits by mistake, and with a minor change you can re-enable your access.

Another possibility is that the ShadowBrokers team reverse engineered everything in the patch, realized their stolen bugs were really and truly fixed, and then released. That's some oddly fast RE work.

Maybe the SB has a source or access inside the USG team that makes up the VEP, or is connected to it in some way (they had to get this information somehow!), and so can say definitively that these bugs were getting fixed, without doing any reverse engineering.

If the SB is FSB, then it seems likely that they have a source inside Microsoft, or access to the patch, security, or QA teams, and were able to get advance notice of the patches. This presents some further dilemmas and "Strategy Opportunities". Or, as someone pointed out, they could have access to MAPP, assuming these bugs went through the MAPP process.

One thing I think is missed in the discussion is that Microsoft's security strategy is, in many ways, subordinate to a PR strategy. This makes sense if you think of Microsoft as a company out to make money. What if we take the Microsoft statement to Reuters at their word, and also note that Microsoft has the best and oldest non-State intelligence service available in this space? In other words, maybe they did not get their vulnerability information from the VEP.

There are a ton of unanswered questions and weird timings with this release which I don't see explored, but maybe Grugq will do a more thorough piece. I wanted to explore this much only to point out one quick thing: the USG cannot trust the integrity of Microsoft's networks or decision makers when it comes to national security interests.


Wednesday, April 19, 2017

0-12 and some duct tape

In a recent podcast, at about seven minutes in, Susan Hennessey says:
"...The authors here are from rather different communities, attorneys, private industry, non-legal policy areas, technical people, and again and again when we talk about cyber policy issues there's this concern that lawyers don't know enough about technology or technologists don't know enough about policy and there's this idea that there's this mythical person that's going to emerge that knows how to code and knows the law and has this really sharp policy and political sensibility and we're going to have this cabbage patch and then cyber security will be fixed - that's never struck me as particularly realistic. . . ."

"I've heard technologists say many many times in the policy space that if you've never written a line of code you should put duct tape over your mouth when it comes to these discussions"

Rob Lee, who has a background in SCADA security, responds with tact, saying, "Maybe we can at least drive the policy discussion with things that are at least a bit technically feasible."

He adds "You don't have to be technical, but you do have to be informed by the technical community and its priorities".

He's nicer than I am. But I'm also writing a paper with Sandro for NATO policy makers, and the thesis, "What I want policy makers to know about cyber war", has been bugging me for weeks. So here goes:

  1. Non-state actors are as important as States
  2. Data and computation don't happen in any particular geo-political place, which has wide ramifications, and you're not going to like them
  3. We do not know what makes for secure code or secure networks. We literally have no idea what helps and what doesn't. So trying to apply standards, or even looking for "due diligence" on security practices, is often futile (c.f. the FTC case on HTC phones)
  4. Almost all the useful historical data on cyber is highly classified, which makes it hard to make policy. And if you don't have data, you should not make policy (c.f. the Vulnerability Equities Process), because what you're doing is probably super wrong
  5. Surveillance software is the exact same thing as intrusion detection software
  6. Intrusion software is the exact same thing as security assessment and penetration testing software
  7. Packets cannot be "American or Foreign" which means a lot of our intel community is using outdated laws and practices
  8. States cannot hope to control or even know what cyber operations take place "within their borders" because the very concept makes almost no sense
  9. Releasing information on vulnerabilities has far-ranging consequences, both in the future and for your past operations, and it's unlikely to be useful to have simple policies on these sorts of things
  10. No team is entirely domestic anymore - every organization and company is multi-national to the core
  11. In the cyber world, academia is almost entirely absent from influential thought leadership. This was not the case in the nuclear age when our policy structures were born, and all the top nuclear scientists worked at Universities. The culture of cyber thinkers (and hence doers) is a strange place, and in ways that will both astonish and annoy you, but also in ways which are strategically relevant.
  12. Give up thinking about "Defense" and "Offense" and start thinking about what is being controlled by what, or in other words what thing is being informed or instrumented or manipulated by what other thing
  13. Monitoring and manipulation are basically the same thing and have the same risks
  14. Software does not have a built-in "intent". In fact, code and data are the same thing. Think of it this way: if I control everything you see and hear, can I control what you do? That's because code and data are the same, like energy and matter. (See the tiny sketch below.)
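And here is a two-line Python illustration of point 14: the same bytes are "data" until something chooses to execute them, at which point they are "code".

    payload = "print('these bytes were data a moment ago')"  # just a string: data
    exec(payload)  # now the interpreter treats those same bytes as code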

If I had to answer Susan's question, I'd give the less tactful version of Rob's answer, which is that in fact we are now in a place where those cabbage patch dolls are becoming prominent. Look at John De Long, who was a technologist sitting next to me before he became a lawyer, and Lily Ablon, and Ryan Speers, Rob Joyce, and a host of others, all of whom had deep technological experience before they became policy people. The other side of the story, though, is that every Belfer Center post-grad or "Cyber Law and Policy Professor" with no CS experience of any kind has to leave the field and go spend some time doing bug bounties or pen testing or incident response for a while to get some chops.

But think of it this way: the soccer game's score is 0-12, and not in your favor. Wouldn't you want to change the lineup for the second half?

Monday, April 17, 2017

Fusion Centers


So the Grugq does great stand-up - his timing and sense of using words is amazing. But it is important to remember that when I met him, a million years ago, he was not pontificating. He was, as I was, working for @stake and writing Solaris kernel rootkits on the side. Since then he's spent a couple of decades sitting in cyber-land, getting written up by Forbes, and hanging out in Asia talking to actual hackers about stuff. My point is that he's a native in the lingo, unlike quite a lot of other people who write and talk about the subject.

Which is why I found his analysis of Chinese Fusion Centers (see roughly 35 minutes in) very interesting. Because if you're building cyber norms or trying to enforce them, you have to understand the mechanisms other countries use to govern their cyber capabilities all the way to the ground floor. It's not all "confidence building measures" and other International Relations Alchemy. I haven't been able to find any other open source information on how this Fusion Center process works in China, which is why I am pointing you at this talk. [UPDATE: here is one, maybe this, also this book]

Likewise, the view among foreign SIGINT programs that the US has decided to gerrymander the cyber norms process is fascinating. "What we are good at is SUPER OK, and what you are good at is NOT GOOD CYBER NORMS" is the US position according to the rest of the world, especially when it comes to our stance on economic espionage over cyber. This is an issue we need to address.


Saturday, April 15, 2017

VEP: When disclosure is not disclosure.

True story, yo.

I want to tell a personal story of tragedy and woe to illustrate a subtle point that apparently is not well known in the policy sect. That point is that sometimes, even when an entire directory of tools and exploits leaks, your bugs still survive, hiding in plain sight.

A bunch of years ago, one of my 0days leaked in a tarball of other things, and became widely available. At the time, we used it as training - porting it to newer versions of an OS or to a related OS was a sort of fun practice for new people, and also useful.

And when it leaked, I assumed the jig was up. Everyone would play with it, and not just kill that bug but the whole technique around the exploitation and the attack surface it resided in.

And yet, it never happened. Fifteen years later only one person has even realized what it was, and when he contacted us, we sent him a more recent version of the exploit, and then he sent back a much better version, in his own style, and then he STFU about it forever.

I see this aspect in the rest of the world too - analyzing a leaked mailspool or toolset is more work than the community at large is going to put into it. People are busy. Figuring out which vulnerability some exploit targets, and how, requires extreme expertise and effort in most cases.

So I have this to say: just because your adversary, or even the entire world, has a copy of your exploit does not mean it is 100% burnt. You have to add this kind of difficult calculus to any VEP decision. This happens all the time, and I've seen the effects up close.

ShadowBrokers, the VEP, and You

Quoting Nicholas Weaver in his latest Lawfare article about the ShadowBrokers' Windows 0days release, which has a few common thematic errors as they relate to the VEP:
This dump also provides significant ammunition for those concerned with the US government developing and keeping 0-day exploits. Like both previous Shadow Brokers dumps, this batch contains vulnerabilities that the NSA clearly did not disclose even after the tools were stolen. This means either that the NSA can’t determine which tools were stolen—a troubling possibility post-Snowden—or that the NSA was aware of the breach but failed to disclose to vendors despite knowing an adversary had access. I’m comfortable with the NSA keeping as many 0-days affecting U.S. systems as they want, so long as they are NOBUS (Nobody But Us). Once the NSA is aware an adversary knows of the vulnerabilities, the agency has an obligation to protect U.S. interests through disclosure.

This is a common feeling: the idea that "when you know an adversary has it, you should release it to the vendor". And of course, hilariously, this is what happened in this particular case, where we learned a few interesting things.

"No individual or organization has contacted us..."

"Yet mysteriously all the bugs got patched right before the ShadowBroker's release!"
We also learned, from Snowden's tweets chiding the USG for not helping MS, that either the Russians have not penetrated the USG->Microsoft communication channel and Microsoft's security team, or else Snowden was kept out of the loop.

That chiding is silly because codenames are by definition unclassified, and having a LIST OF CODENAMES while claiming you have the actual exploits does not mean anything has really leaked.

The side-understanding here is that the USG has probably penetrated ShadowBrokers to some extent. Not only were they certain that ShadowBrokers had the real data, but they also seem to have known their timeframe for leaking it... assuming ShadowBrokers didn't do their release after noticing many of the bugs were patched.

And this information feed is even more valuable than the exploits: what parts of your adversary have you penetrated? Because if we send MS every bug the Russians have, then the Russians know we've penetrated their comms. That's why a "kill all bugs we know the Russians have" rule, as @ncweaver posits and which is often held up as a "common-sense policy", is dangerous and unrealistic without taking into consideration the extremely complex OPSEC requirements of your sources. Every patch is an information feed from you, about your most sensitive operations, to your enemy. We can open that feed only with extreme caution.

Of course the other possibility, looking at this timeline carefully, is that the ShadowBrokers IS the USG. Because the world of mirrors is a super fun place, is why. :)




Tuesday, April 11, 2017

"Don't capture the flag"

Technically Rooted Norms


In Lawfare I critiqued an existing and ridiculous norms proposal from the Carnegie Endowment for International Peace. But many people find my own proposal a bit vague, so I want to un-vague it up a bit here on a more technical blog. :)

Let's start with a high level proposal and work down into some exciting details that follow from the original piece:
"To that end, I propose a completely different approach to this particular problem. Instead of getting the G20 to sign onto a doomed lofty principle of non-interference, let’s give each participating country 50 cryptographic tokens a year, which they can distribute as they see fit, even to non-participating states. When any offensive teams participating in the scheme see such tokens on a machine or network service, they will back off. 
While I hesitate to provide a full protocol spec for this proposal in a Lawfare post, my belief is that we do have the capability to do this, from both a policy and technical capacity. The advantages are numerous. For example, this scheme works at wire speed, and is much less likely to require complex and ambiguous legal interpretation."

FAQ for "Don't Capture the Flag" System


Q: I’m not sure how your proposal works. Banks pick their most sensitive data sets, the ones they really can’t afford to have attacked, and put a beacon on those sets so attackers know when they’ve found the crown jewels? But it all works out for the best because a lot of potential attackers have agreed to back off when they do find the crown jewels? ;-)

A: Less a beacon than a cryptographic signature, really. But of course for a working system you need something essentially steganographic, along with decoys, and a revocation system, and many other slightly more complex but completely workable features that your local NSA or GCHQ person could whip up in 20 minutes on a napkin using things lying around on GitHub.
Also, ideally you want a system that could be sent via the network as well as stored on hosts. In addition, just because you have agreed upon it with SOME adversaries doesn't mean you publish the scheme for all adversaries to read.
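To un-vague this even further, here is a minimal napkin sketch of the token itself, assuming a shared-key HMAC purely for brevity; a deployed scheme would want public-key signatures (so verifiers never hold the minting key), plus the steganography, decoys, and revocation mentioned above. Every name below is my own illustration.

    import hashlib, hmac, secrets

    ISSUER_KEY = secrets.token_bytes(32)  # held by the issuing state; its 50 tokens a year are minted under it

    def mint_token(asset_id: str, year: int) -> bytes:
        # bind the token to a specific asset and a validity year so it can expire
        msg = f"{asset_id}|{year}".encode()
        return hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()

    def verify_token(token: bytes, asset_id: str, year: int) -> bool:
        # an offensive team that finds a candidate token checks it, and backs off on a match
        return hmac.compare_digest(token, mint_token(asset_id, year))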

Q: I think the problem is that all it takes for the system to produce a bad outcome is one non-compliant actor, who can treat the flags not as “keep out” signs but as “treasure here” signs. I’d like a norm system in which we had 80% compliance, but not at the cost of tipping the other 20% off whenever they found a file that maximized their leverage.

A: I agree of course, and to combat this you have a few features:
1. Enough tokens that you have the ability to put some on honeypots
2. Leaks, which, as much as we hate them, would provide transparency on this subject retrospectively; and of course our IC will monitor for transgressions in our anti-hacker operations
3. The fact that knowing whether something is important is often super-easy anyway. It's not like we are confused about where the important financial systems are in a network.

Ok, so that's that! Hopefully that helps, or gives the scheme's critics more to chew on. :)