Thursday, April 20, 2017

Making a NEW and NextGen AntiVirus Company out of DHS instead of an old and busted one


So I have yet another exciting policy proposal based on how the USG can't trust any software vendor's remediation process to be beyond control of the FSB. :)

You can see in the DHS a tiny shadow of an anti-virus company. EINSTEIN and Threat Intelligence and incident response, and managed penetration testing - the whole works. But we're kinda doing it without realizing what we're building. And why not develop real next-gen infosec companies instead?

In fact, the way using secret USG information would work best is if we could use it ALL AT ONCE. Instead of publishing reports and giving the Russians time to upgrade all their trojans as various companies react at different times, we can FLASH UNINSTALL every variant of a single Russian trojan, as if we were FireEye, on any company that opts in to our system.

Also, why should we rely on Microsoft's patches when we can, as soon as we need to, make our own USG-developed patches with something like 0patch.com? Not doing this seems like being horribly unprepared for real-world events like leaks, no?

Why can't I sign up to the DHS "behavioral analysis" AI endpoint protection for my company, which has a neural network trained not just on open-source malware, but on the latest captured Russian trojans? 

Think Next Gen people! :)

Alternative Theories

Fact 1: ShadowBrokers release was either "Old-Day" or "Patched"
Fact 2: Microsoft PR claims no individual or organization told them (found them all internally, eh?)

And of course, Fact 3: the US-CERT response to the ShadowBroker's earlier announcements.

So there are a lot of possibilities here that remain unexplored. I know the common thought (say, on Risky.biz) is that the Vulnerability Equities Process jumped into action, and helped MS with these bugs and then the patches came out JUST IN TIME.

Question: Why would the US not publicize, as Susan Hennessey has suggested, this effort from the VEP?

Fact 4: The SB release was on Friday, three short days after MS Patch Tuesday.

One possibility is that the SB team tested all their bugs in a trivial way by running them against the patched targets, then released when nothing worked anymore. But no pro team works this way, because a lot of the time "patches" break exploits by mistake, and with a minor change you can re-enable your access.

Another possibility is that the ShadowBroker's team reverse engineered everything in the patch, realized their stolen bugs were really and truly fixed, and then released. That's some oddly fast RE work.

Maybe the SB has a source or access inside the USG team that makes up the VEP, or is connected to it in some way (they had to get this information somehow!), and is able to say definitively that these bugs were fixed, without having to do any reverse engineering.

If the SB is the FSB, then it seems likely that they have a source inside Microsoft or access to the patch or security or QA team, and were able to get advance notice of the patches. This presents some further dilemmas and "Strategy Opportunities". Or, as someone pointed out, they could have access to MAPP, assuming these bugs went through the MAPP process.

One thing I think is missed in the discussion is that Microsoft's security strategy is in many ways subordinate to a PR strategy. This makes sense if you think of Microsoft as a company out to make money. What if we take the Microsoft statement to Reuters at their word, and also note that Microsoft has the best and oldest non-State intelligence service available in this space? In other words, maybe they did not get their vulnerability information from the VEP.

There are a ton of unanswered questions and weird timings with this release, which I don't see explored, but maybe Grugq will do a more thorough piece. I wanted to explore this much to point out one quick thing: the USG cannot trust the integrity of Microsoft's networks or decision makers when it comes to national security interests.


Wednesday, April 19, 2017

0-12 and some duct tape

In a recent podcast Susan Hennessey at about seven minutes in says:
"...The authors here are from rather different communities, attorneys, private industry, non-legal policy areas, technical people, and again and again when we talk about cyber policy issues there's this concern that lawyers don't know enough about technology or technologists don't know enough about policy and there's this idea that there's this mythical person that's going to emerge that knows how to code and knows the law and has this really sharp policy and political sensibility and we're going to have this cabbage patch and then cyber security will be fixed - that's never struck me as particularly realistic. . . ."

"I've heard technologists say many many times in the policy space that if you've never written a line of code you should put duct tape over your mouth when it comes to these discussions"

Rob Lee, who has a background in SCADA security, responds with tact saying "Maybe we can at least drive the policy discussion with things that are at least a bit technically feasible."

He adds "You don't have to be technical, but you do have to be informed by the technical community and its priorities".

He's nicer than I am, but I'm also writing a paper with Sandro for NATO policy makers, and the thesis, "What I want Policy Makers to know about cyber war", has been bugging me for weeks. So here goes:

  1. Non-state actors are as important as States
  2. Data and computation don't happen in any particular geo-political place, which has wide ramifications, and you're not going to like them
  3. We do not know what makes for secure code or secure networks. We literally have no idea what helps and what doesn't help. So trying to apply standards or even looking for "due diligence" on security practices is often futile (c.f. FTC case on the HTC phones)
  4. Almost all the useful historical data on cyber is highly classified, and this makes it hard to make policy, and if you don't have data, you should not make policy (c.f. the Vulnerability Equities Process) because what you're doing is probably super wrong
  5. Surveillance software is the exact same thing as intrusion detection software
  6. Intrusion software is the exact same thing as security assessment and penetration testing software
  7. Packets cannot be "American or Foreign" which means a lot of our intel community is using outdated laws and practices
  8. States cannot hope to control or even know what cyber operations take place "within their borders" because the very concept makes almost no sense
  9. Releasing information on vulnerabilities has far-ranging consequences, both in the future and for your past operations, and it's unlikely to be useful to have simple policies on these sorts of things
  10. No team is entirely domestic anymore - every organization and company is multi-national to the core
  11. In the cyber world, academia is almost entirely absent from influential thought leadership. This was not the case in the nuclear age when our policy structures were born, and all the top nuclear scientists worked at Universities. The culture of cyber thinkers (and hence doers) is a strange place, and in ways that will both astonish and annoy you, but also in ways which are strategically relevant.
  12. Give up thinking about "Defense" and "Offense" and start thinking about what is being controlled by what, or in other words what thing is being informed or instrumented or manipulated by what other thing
  13. Monitoring and manipulation are basically the same thing and have the same risks
  14. Software does not have a built-in "intent". In fact, code and data are the same thing. Think of it this way: if I control everything you see and hear, can I control what you do? That's because code and data are the same, like energy and matter.
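The code-and-data point can be made concrete in a few lines of Python. This is a purely illustrative sketch (the variable names are invented): the same bytes sit inert as a string, get manipulated like any other data, and then run as code the moment something chooses to execute them.

```python
payload = "total = sum(range(10))"   # to this program, just a string: data
stored = payload.upper().lower()     # we can shuffle it around like any data
namespace = {}
exec(stored, namespace)              # the same bytes, now running as code
assert namespace["total"] == 45
```

Every interpreter, macro engine, and scriptable document format makes this same non-distinction, which is why "data" inputs so often turn into code execution.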

If I had to answer Susan's question, I'd give the less tactful version of Rob's answer: we are in fact now in a place where those cabbage patch dolls are becoming prominent. Look at John De Long, who was a technologist sitting next to me before he became a lawyer, and Lily Ablon, and Ryan Speers, Rob Joyce, and a host of others, all of whom had deep technological experience before they became policy people. The other side of the story is that every Belfer Center post-grad or "Cyber Law and Policy Professor" with no CS experience of any kind has to leave the field and go spend some time doing bug bounties or pen testing or incident response for a while to get some chops.

But think of it this way, the soccer game's score is 0-12, and not in your favor. Wouldn't you want to change the lineup for the second half?

Monday, April 17, 2017

Fusion Centers


So the Grugq does great stand up - his timing and sense of using words is amazing. But it is important to remember that when I met him, a million years ago, he was not pontificating. He was, as I was, working for @stake and on the side writing Solaris kernel rootkits. Since then he's spent a couple decades sitting in cyber-land, getting written up by Forbes, and hanging out in Asia talking to actual hackers about stuff. My point is that he's a native in the lingo, unlike quite a lot of other people who write and talk about the subject.

Which is why I found his analysis of Chinese Fusion Centers (see roughly 35 minutes in) very interesting. Because if you're building cyber norms or trying to enforce them, you have to understand the mechanisms other countries use to govern their cyber capabilities all the way to the ground floor. It's not all "confidence building measures" and other International Relations Alchemy. I haven't been able to find any other open source information on how this Fusion Center process works in China, which is why I am pointing you at this talk. [UPDATE: here is one, maybe this, also this book]

Likewise, the view among foreign SIGINT programs that the US has decided to gerrymander the cyber norms process is fascinating. "What we are good at is SUPER OK, and what you are good at is NOT GOOD CYBER NORMS" is the US position according to the rest of the world, especially when it comes to our stance on economic espionage over cyber. This is an issue we need to address.


Saturday, April 15, 2017

VEP: When disclosure is not disclosure.

True story, yo.

I want to tell a personal story of tragedy and woe to illustrate a subtle point that apparently is not well known in the policy sect. That point is that sometimes, even when an entire directory of tools and exploits leaks, your bugs still survive, hiding in plain sight.

A bunch of years ago, one of my 0days leaked in a tarball of other things, and became widely available. At the time, we used it as training - porting it to newer versions of an OS or to a related OS was a sort of fun practice for new people, and also useful.

And when it leaked, I assumed the jig was up. Everyone would play with it, and not just kill that bug, but the whole technique around the exploitation and the attack surface it resided in.

And yet, it never happened. Fifteen years later only one person has even realized what it was, and when he contacted us, we sent him a more recent version of the exploit, and then he sent back a much better version, in his own style, and then he STFU about it forever.

I see this aspect in the rest of the world too - the analysis of a leaked mailspool or toolset is more work than the community at large is going to put into it. People are busy. Figuring out which vulnerability some exploit targets and how requires extreme expertise and effort in most cases.

So I have this to say: Just because your adversary or even the entire world has a copy of your exploit, does not mean it is 100% burnt. And you have to add this kind of difficult calculus to any VEP decision. It happens all the time, and I've seen the effects up close.

ShadowBrokers, the VEP, and You

Quoting Nicholas Weaver in his latest Lawfare article about the ShadowBrokers' Windows 0days release, which makes a few common thematic errors as they relate to the VEP:
This dump also provides significant ammunition for those concerned with the US government developing and keeping 0-day exploits. Like both previous Shadow Brokers dumps, this batch contains vulnerabilities that the NSA clearly did not disclose even after the tools were stolen. This means either that the NSA can’t determine which tools were stolen—a troubling possibility post-Snowden—or that the NSA was aware of the breach but failed to disclose to vendors despite knowing an adversary had access. I’m comfortable with the NSA keeping as many 0-days affecting U.S. systems as they want, so long as they are NOBUS (Nobody But Us). Once the NSA is aware an adversary knows of the vulnerabilities, the agency has an obligation to protect U.S. interests through disclosure.

This is a common feeling. The idea that "when you know an adversary has it, you should release it to the vendor". And of course, hilariously, this is what happened in this particular case, where we learned a few interesting things.

"No individual or organization has contacted us..."

"Yet mysteriously all the bugs got patched right before the ShadowBroker's release!"
We also learned that either the Russians have not penetrated the USG-to-Microsoft communication channel and Microsoft's security team, or else Snowden was kept out of the loop, judging from his tweets chiding the USG for not helping MS.

This is silly because codenames are by definition unclassified, and having a LIST OF CODENAMES and claiming you have the actual exploits does not mean anything has really leaked.

The side-understanding here is that the USG has probably penetrated ShadowBrokers to some extent. Not only were they certain that ShadowBrokers had the real data, but they also seem to have known their timeframe for leaking it...assuming ShadowBrokers didn't do their release after noticing many of the bugs were patched.

And this is the information feed that is even more valuable than the exploits: What parts of your adversary have you penetrated? Because if we send every bug the Russians have to MS, then the Russians know we've penetrated their comms. That's why a "kill all bugs we know the Russians have" rule, as @ncweaver posits, and which is often held up as a "common-sense policy", is dangerous and unrealistic without taking into consideration the extremely complex OPSEC requirements of your sources. Any patch is an information feed from you, about your most sensitive operations, to your enemy. We can open that feed only with extreme caution.

Of course the other possibility, looking at this timeline carefully, is that the ShadowBrokers IS the USG. Because the world of mirrors is a super fun place, is why. :)




Tuesday, April 11, 2017

"Don't capture the flag"

Technically Rooted Norms


In Lawfare I critiqued an existing and ridiculous norms proposal from Carnegie Endowment for International Peace. But many people find my own proposal a bit vague, so I want to un-vague it up a bit here on a more technical blog. :)

Let's start with a high level proposal and work down into some exciting details as follow from the original piece:
"To that end, I propose a completely different approach to this particular problem. Instead of getting the G20 to sign onto a doomed lofty principle of non-interference, let’s give each participating country 50 cryptographic tokens a year, which they can distribute as they see fit, even to non-participating states. When any offensive teams participating in the scheme see such tokens on a machine or network service, they will back off. 
While I hesitate to provide a full protocol spec for this proposal in a Lawfare post, my belief is that we do have the capability to do this, from both a policy and technical capacity. The advantages are numerous. For example, this scheme works at wire speed, and is much less likely to require complex and ambiguous legal interpretation."

FAQ for "Don't Capture the Flag" System


Q: I’m not sure how your proposal works. Banks pick their most sensitive data sets, the ones they really can’t afford to have attacked, and put a beacon on those sets so attackers know when they’ve found the crown jewels? But it all works out for the best because a lot of potential attackers have agreed to back off when they do find the crown jewels? ;-)

A: Less a beacon than a cryptographic signature, really. But of course for a working system you need something essentially steganographic, along with decoys, and a revocation system, and many other slightly more complex but completely workable features that your local NSA or GCHQ person could whip up in 20 minutes on a napkin using things lying around on GitHub.
Also, ideally you want a system that could be sent via the network as well as stored on hosts. In addition, just because you have agreed upon it with SOME adversaries doesn't mean you publish the scheme for all adversaries to read.
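To back up the napkin claim, here is a minimal, purely illustrative sketch of the mint-and-verify core in Python, using only stdlib HMAC. The token format, the "DCTF" label, and the key-distribution comment are all invented for this example; a real scheme would add the steganographic embedding, decoys, and revocation discussed above.

```python
import hashlib
import hmac
import secrets

def issue_token(country_key: bytes, token_id: int, year: int) -> bytes:
    # A participating state mints one of its 50 yearly tokens.
    msg = b"DCTF|%d|%d" % (token_id, year)
    tag = hmac.new(country_key, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def verify_token(country_key: bytes, blob: bytes) -> bool:
    # An offensive team checks a blob found on a host before proceeding.
    parts = blob.split(b"|")
    if len(parts) != 4:
        return False
    msg = b"|".join(parts[:3])
    expected = hmac.new(country_key, msg, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(parts[3], expected)

key = secrets.token_bytes(32)                   # distributed out of band
flag = issue_token(key, token_id=7, year=2017)
assert verify_token(key, flag)                  # back off: machine is flagged
assert not verify_token(key, flag[:-1] + b"X")  # tampered token fails
```

The point is only that the cryptographic core is tiny; the hard parts are key distribution and embedding, not the math.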

Q: I think the problem is that all it takes for the system to produce a bad outcome is one non-compliant actor, who can treat the flags not as “keep out” signs but as “treasure here” signs. I’d like a norm system in which we had 80% compliance, but not at the cost of tipping the other 20% off whenever they found a file that maximized their leverage.

A: I agree of course, and to combat this you have a few features:
1. Enough tokens that you have the ability to put some on honeypots
2. Leaks, which as much as we hate them would provide transparency on this subject retrospectively, and of course, our IC will monitor for transgressions in our anti-hacker operations
3. The fact that knowing whether something is important is often super-easy anyways. It's not like we are confused where the important financial systems are in a network. 

Ok, so that's that! Hopefully that helps, or gives the scheme's critics more to chew on. :)








Wednesday, March 29, 2017

Stewart Baker with Michael Daniel on ze Podcasts

I want to put a quick note out about the latest Steptoe Cyber Law podcast, which is usually interesting because Stewart Baker is a much better interviewer than most when facing people like this. He's informed, of course, as the US's best known high-powered lawyer in the space. But also, he's willing to push back against the people on his show and ask harder questions than almost any other public interviewer.

TFW: "I know shit and I have opinions"

http://www.steptoecyberblog.com/2017/03/27/steptoe-cyberlaw-podcast-interview-with-michael-daniels/

The whole interview is good, and Michael Daniel's skillset is very much (and always was) managing and understanding the physics of moving large government organizations around for the better. His comments on the interview are totally on point when it comes to how to handle moving government agencies to the cloud. Well worth the time!

More to the point of this blog however: 47 minutes into the podcast Stewart Baker says, basically, that he thinks the VEP is bullshit, and everyone he knows (which is everyone) thinks the VEP is bullshit. Daniel says about the VEP not that it works in any particular way, but that he is a "believer", and, to be fair, his position is "moderate" in some ways. In particular, he acknowledges that there is a legitimate national security interest in exploitation. But he cannot address any of the real issues with the VEP at a technical level. In summary: he has no cogent defense of the VEP other than a nebulous ideology.


Wednesday, March 1, 2017

Control of DNS versus the Security of DNS

"We're getting beat up by kids, captain!"


So instead of futile and counterproductive efforts trying to regulate all vulnerabilities out of the IoT market, we need to understand that our policies for national cybersecurity may have to let go of certain control points we have, in order to build a resilient internet.

In particular, central points of failure like DNS are massive weak points for attacks run by 19 year olds in charge of botnets.

But why is DNS still so centralized when decentralized versions like Convergence have been built? The answer is: Control.

Having DNS centralized means big businesses and governments can fight over trademarked DNS names, it means PirateBay.com can be seized by the FBI. It is a huge boon for monitoring of global internet activity.

None of the replacements offer these "features". So we as a government have to decide: Do we want a controllable naming system on the internet, or a system resistant to attack from 19 year olds? It's hard to admit it, but DNSSec solved the wrong problem.

Tuesday, February 21, 2017

Some hard questions for team Stanford


These Stanford panels have gotten worse, is a phrase I never thought I'd say. But the truly painful hour of reality TV above needs jazzing up more than the last season of Glee, so here is my attempt to help, with some questions that might be useful to ask next time. But before I do, a quick Twitter conversation with Aaron Portnoy, who used to work at Exodus. I mention him specifically because Logan Brown, the CEO of Exodus, is the one person on the panel who has experience with the subject matter.

Aaron worked at Exodus before their disclosure policy change (aka, business model pivot). This followup is also interesting.

Let's take a look at why these panels happen, based on the very technical methodology of checking who sponsors them, as displayed by the sad printouts taped to the table. . .

At one point Oren, CEO of Area1, is like "Isn't the government supposed to help defend us? Why do they ever use exploits?", assuming all defense and equities issues are limited to one domain and business model (his), even though his whole company's pitch is that THEY can protect you.

The single most poisonous idea that keeps getting hammered through these panels by people without operational experience of any kind is the idea that the government will use a vulnerability and then give it to vendors. The only way to get across how much of a non-starter this is, is to look at it from the other direction with some sample devil's advocate questions:

Some things are obvious even to completely random Twitter users...yet are never really brought up at Stanford panels on the subject.

  1. What are the OPSEC issues with this plan?
  2. How do we handle non-US vendors, including Russian/Chinese/Iranian vendors?
  3. How do we handle our exploit supply chain? 
  4. Are vulnerabilities linked?
  5. What impact will this really have, and do we have any hard data to support this impact on our security?
  6. Should we assume that defense will always be at a disadvantage and hence stockpiling exploit capability is not needed?
  7. Why are we so intent on this with software vulnerabilities and not the US advantage in cryptographic math? Should we require the NSA to publish their math journals as well?
  8. What do we do when vulnerability vendors refuse to sell to us if their vulns are at risk of exposure?
  9. What do we do when the price for vulnerabilities goes up 100x? Is this a wise use of taxpayer money?

Just a start. :)


Friday, February 17, 2017

Just because deterrence is different in cyber doesn't mean it doesn't exist

Are there Jedi out there the Empire cannot defeat?

That's a long title for a blog post. But ask yourself, as I had to ask Mara Tam today: Do we always have escalatory dominance over non-state players in cyber?  I'm not sure we do.

What does that mean for cyber deterrence or for our overall strategy or for the Tallinn team's insistence that only States need be taken into account in their legal analysis? (Note: Not good things.)


That said, Immunity's deterrence against smaller states has always been: I will spend the next ten years building a team and a full toolchain to take you on if you mess with our people and we catch you, which we might. Having a very very long timeline of action is of great value in cyber.

Thursday, February 16, 2017

DETERRENCE: Drop other people's warez

I'll take: Famous old defacements for $100, Alex


I had this whole blogpost written - it had Apache-Scalp in it, and some comments on my attempts at dating, and Fluffy Bunny, and was all about how whimsical defacement had a certain value in terms of expressing advanced capability, and hence in terms of deterrence. "Whimsy as a force multiplier!"

But then Bas came over and pointed out that I was super wrong. Not only are defacements usually useless, but they are not the Way. In most domains, deterrence is about showing what you can do. In cyber, deterrence is showing what other people can do.

The Russians and US have been performing different variations on this theme. The ShadowBrokers team is a 10 out of 10 on the scale, and our efforts to out their trojans, methodologies, and team members via press releases are similar, but perhaps less effective overall.

If you are still on the fence over whether the VEP is a good idea: The Russians can release an entire tree of stolen exploits and trojans because:

  1. Our exploits don't overlap with theirs
  2. Our persistence techniques, exfiltration techniques, and hooking techniques that we use in our implants, where they are not public, don't overlap with theirs.
  3. Or maybe they filtered it out so techniques they still use don't get burnt?


Tuesday, February 14, 2017

Cover Visas

There is absolutely no steganography in this picture of a fire!

So the problem with making it so the only way to get from Iraq to the US is being a cooperating asset is that you put our assets' families at risk. We need a huge number of people who got green cards purely from a lottery or from extended family chains, so that when we want to offer someone an "expedited magical spy green card" we can, and his or her family won't automatically get kneecapped.

This is one of those strategic dilemmas. What if it's 100% true that there's someone bad coming in, because why not? It may literally be impossible to vet people at the border. But if you NEED a permeable border to build your local HUMINT network, and without one you are completely blind in-country, you may have to just bear that risk.

At some level, building cover traffic is important, and also one of the most difficult things in SIGINT. Keep in mind that, as far as anyone can tell, public research into steganography died as soon as digital watermarks clearly were not the answer to DRM for the big media labels, for the simple reason that the way to remove any theoretical digital watermark on a song is to mp3-encode it.


Saturday, February 11, 2017

The TAO Strategy's Weakness: Hal Fucking Martin the Third



I want everyone to watch the video above, but think of it in terms of how to build a cyber war grand strategy. 21-year-old aggressive-as-fuck me thought that the whole strategy of TAO was stupid. But I couldn't say why, because I was all raw id the way 21-year-olds are. "Scale is good", people intuitively think - we need to be able to do this with a massive body of people we can train up.

40-year-old me has proposed an insane idea - as different from the way we do things now as a Eukaryote is from the Bacteria and Archaea it evolved from. I cloak it in "hack back" or "active defense", but the truth is that it stems from a single philosophy I've held my whole life, one that dates to when TESO and ADM were ripping their way through the Mesozoic Internet.

It is this simple phrase: You should not use the exploit if you cannot write it. The truth is, I cannot write the exploits that Scrippie writes. But I for sure understand them. Let that be our bar then - a nucleus composed of small teams of people who understand the exploits they are using, but don't share them or any of their other infrastructure with other teams.

We talk a little bit about dwell time here. But we are now in an age when the dwell time of a hacker in your system who doesn't yet have full access to, analysis of, and exfiltration of your data is zero. How does your strategy of "hunting" handle that era? And this applies to our and other countries' cyber offense teams more than anywhere else. We have a knife made out of pure information, and all the SAPs in the world can't save us with the current structure we have.

In summary, how many separate exploit and implant and infrastructure and methodology chains do we really need to obtain dominance over this space? "So many", as Bri would say.

Friday, February 10, 2017

Shouting into the void *ptr;

Getting old people off Office is less a technical problem than a political one.


So a couple other hackers with deep expertise in exploitation and offensive operations and I often go to a USG policy forum which will remain unnamed and we propose strange things. One of those strange things can be best titled: Insecure at any price, the Microsoft story.

What this means is exactly what you're seeing in the latest EO: Get off Microsoft on your desktop. You cannot secure it. Despite Jason Healey's obsession with innovations from Silicon Valley, sometimes you have to say: There are things we cannot build with.

I will list them below:

  • Microsoft Office (Google Docs 100 times better anyways)
  • Microsoft Windows
  • OS X
  • PHP
  • ASP (ASP.NET good, old ASP bad)
  • Ruby on Rails (not sure how they made this so insecure, but they did)
  • Sharepoint. NEVER USE SHAREPOINT. It's a security nightmare because XSS exists.
  • Wordpress.
But it is also true about protocols. SMTP needs to be almost no part of your business. If you regularly use SMTP and email in your business structure, you are failing, and we already have replacements in the messaging space that do everything it does, but better. 

Imagine two hackers sitting with policy lawyers and we say "Use Chromebooks, Use iPads" and that's what you're reading in the latest EO. That's how you solve OPM-hacking type issues. Of course, it is likely to simply be a coincidence. You never know where the info from these policy meetings ends up. It is only slightly more substantive than literally shouting into the void.

Tallinn 2.0 is the Bowling Green Massacre of Cyber War Law


Above is the Atlantic Council livestream of the Tallinn Manual 2.0 launch. Look, no-one can deny that Mike Schmitt is a genius, but the Tallinn Manual is more mirage than oasis. Let me sum it up: They can't AGREE on whether the Russian IO work on the US Election was anything in particular, and they already acknowledge that they don't have solidity on what state sovereignty means in cyberspace. In other words, wtf does the Treaty of Westphalia have to do with information warfare, if anything, is still an unanswered question, no matter how many of "the best lawyers in the world" you put in a room in Tallinn.

That means that despite his opening statement at EVERY EVENT HE'S EVER AT, the Internet is literally an ungoverned space, with a sort of militant "rule of the strong" applying at best. That's what the Russian efforts this fall mean.

That doesn't mean his efforts are wasted - The US DoD and other states LOVE a manual that can allow them to rationalize their actions, and that's why this is on the desktops of specialist lawyers across the space. Right now CYA in cyber costs fifty bucks on Amazon. Deep down, if you can't agree on the lines or definition of anything, then you don't have a process that produces consistent results.

"We captured all reasonable views and put them in the manual." What is this, the Talmud of cyber war?

But these are just my opinions (and yes, they are shared among the high-level International Law specialists in this space I've talked to at the pool), and the hard part of this release is how little criticism processes like this receive. These sorts of events are love-fests, not working groups.

Monday, January 30, 2017

The Data is Nowhere

Jennifer Daskal posted a note over on justsecurity which pointed out the continuing hilarious struggle the court system is dealing with when it comes to data and the nature of data.

To simplify the legal question beyond recognition, they are trying to answer whether or not the US Government can subpoena Microsoft for data it holds in the cloud, ostensibly in Ireland.

Microsoft's preferred answer is "No," and it "won."


I want to point out this is just one of many, many places where our legal system is failing because data is nowhere. If I'm a cloud provider I can split the data with an XOR and store it in two completely different places. Or the data can (and usually DOES) move to the closest location it has been accessed from recently, or all the closest locations. Or I can store the data in Vanuatu, with an agreement that they won't subpoena me under any circumstances.
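To make that XOR point concrete - this is a minimal sketch of the textbook two-share construction, not any actual cloud provider's code, and the function names are hypothetical - splitting data so that neither location alone holds anything recoverable takes a few lines of C:

```c
#include <stdlib.h>

/* Hypothetical sketch: split a secret into two shares so that neither
   "data center" alone holds anything recoverable. share_a is random
   noise; share_b = secret XOR share_a. XORing the two shares together
   re-creates the secret, so the data effectively lives in both
   jurisdictions and neither. */
void xor_split(const unsigned char *secret, size_t len,
               unsigned char *share_a, unsigned char *share_b) {
    for (size_t i = 0; i < len; i++) {
        share_a[i] = (unsigned char)rand();   /* stored in country A */
        share_b[i] = secret[i] ^ share_a[i];  /* stored in country B */
    }
}

/* Recombine the shares to recover the original bytes. */
void xor_join(const unsigned char *share_a, const unsigned char *share_b,
              size_t len, unsigned char *out) {
    for (size_t i = 0; i < len; i++)
        out[i] = share_a[i] ^ share_b[i];
}
```

Subpoena either share by itself and all you get is random-looking bytes; asking where the data "is" has no clean answer.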

These are not scenarios for clever legal minds to pile spaghetti-language onto to try to define their way around the problem. Since the first networked file system was installed on a Unix cluster, the only thing we've known about data is that we have no idea where it is really stored.

This is the ongoing schism between our legal system and reality: the Tallinn cyber policy lawyers, as their very first axiom, declared that data must be stored somewhere geographically, and that the laws of war were therefore simple to apply to it! We try to define our rules around surveillance based on whether an IP packet (or phone call) is foreign or domestic. We have the Rule 41 extension, which runs into this very same issue. These are all examples of an inability to recognize the truth: The Data is Nowhere.

Even more importantly, the computation is nowhere. A real computer, and there are perhaps five in the world, is a massively distributed system, making parallel computations in what is almost an organic fashion. We have yet to have good examples of how this affects our legal system, but it will, and not in the ways we would like.

Tuesday, January 17, 2017

The Atlantic Council Paper

The Atlantic Council released a new paper on cyber security strategy from Jason Healey: PDF Link

Video Introduction Panel:


You can learn a lot from watching the video, and most importantly that in Jason's worldview, the attackers are the Other. The video itself, like many of these panel discussions, is largely people agreeing with each other.

They start the discussion by talking about strategies of the past and how easy they were to summarize. Two examples below:

Containment: A one word strategy ftw!

COIN: Kill the bad guys, win the hearts of the good guys.

But more truthfully, if you had to distill COIN into a memo it would be "Know everything about everyone". In WWII we used to send spotter planes out into the ocean, which "happened" to come across shipments, which we then sank. The goal, obviously, was to have the ships radio back "SHIT WE'VE BEEN SPOTTED", and protect our real source, which was breaking their crypto and knowing their exact path. Drones are the same thing. They're the scary face of our surveillance, the frontman built mostly out of Java middleware. But they're just like those spotter planes - sent to give you something to fear that's not the real boogeyman. In other words, our COIN strategy is our cyber strategy: mostly redirection and sleight of hand.

But yes, when we take a hit, we realize that what we need is a mixed martial art for cyberspace. And that's what this paper SHOULD be.

In the early 2000s, I sat in Harlem starting Immunity and also helping build the early version of our cyber war strategy (in particular, how to glue CNE to IO, which is what Assange was figuring out as well). Quickly we realized we needed to understand how humans form groups in the internet age. Only one professor has interesting things to say about that as far as I can tell, and that's Clay Shirky. You'll want to read his blog and his book. He was a visionary around all of this material, which at the time had very little traction in the computer security world, but if you specialized in offense (as we did) you could see how important it was, because so many of our instinctual reactions in defense are wrong.

In jiu-jitsu, the first thing you learn is that many of your built-in instincts about protecting yourself will get you tapped out. In particular, when someone is sitting on your chest and you push them off with your arms, you immediately get your arm broken (armbarred).

The crowd that argues that we should always "lean towards defense" in the cyber policy world likes to use vulnerability discovery as their demonstration for how we should create policies that do that. In particular they think our use of unknown vulnerabilities should be highly limited, and any we find should be immediately given to the vendor. This establishes an outstretched link of information from our signals intelligence arms to our adversary, which is as good an idea as reaching out your arm to a BJJ fighter sitting on top of you.

You cannot "lean" in any one direction. In fact, rather than "offense" and "defense", a better mindset is understanding what you control and what you do not, just as in jiu-jitsu. Is ubiquitous stronger crypto leaning towards defense, or is it preventing defense (because you cannot look into and filter traffic)? Advantage in this space is not linear, and the core argument of the paper is overly simplified because of that.


Let's talk more specifically about the paper's arguments:

Note that Dept of Commerce funding is on a downward trendline, from $10.2 to $8.5 billion USD. Dept of Defense is at $521B or so. Let's just say nearly TWO ORDERS OF MAGNITUDE BIGGER. But more than that, the mission of the Dept of Defense requires it to be a giant software and IT company. They connect millions of people as a matter of day-to-day survival, and always have.

But part of the reason for why the DoD is a center of gravity in cyber policy is simply that power in cyberspace is maybe best defined as "We know things that you don't." The IC is a natural fit. Commerce is not.

This entire paper is mostly about item 6 on his list. Immunity gave a whole talk in 2011 on why this is misleading, but we will go over how this paper handles it in depth in this blogpost. It's interesting that despite the fact that Jason is from Columbia University, his worldview is directly rooted in Silicon Valley. :)

I have been on the offense for two decades and I can say one thing about it: The grass is always greener on the other side of cyberspace. While every defender, including this paper, laments that the field is tilted towards offense, offensive teams know that you only have to be caught once to lose your entire toolchain, a toolchain that was going bad faster than tomatoes left out in the Florida sun, except a million times more expensive.

You think the NSA wants to be writing and maintaining an entire toolkit that trojans the microcode inside hard drive controllers? They do this because they are at a disadvantage, not as a show of strength.

Let's examine his argument for why Offense > Defense a bit:
Attackers have had an easier time than defense, owing to at least four key failures: Internet architecture, software weaknesses, open doors for attackers, and complexity.

In particular, he claims that internet protocols were designed without security, that software has bugs and there are no real market incentives to produce secure code, the cliche that attackers always attack the weakest point whereas defenders have to defend all points, and that the interactions between our processes on the internet are so complex they cannot be reasoned about, and hence cannot be defended.

Skip down to Page 30 where he tries to address our issues with cyber defense with strategic countermeasures.



The questions in this paper demonstrate how little we often know about this space before trying to make major policy decisions. The Wassenaar debacle this year is an example of us trying to "lean towards defense" and look where that got us.

Check out the wishful but patriotic thinking in the following paragraph, clearly written before the election happened:

Remember that time Apple told China to go to hell when they asked it to remove LinkedIn from all Chinese iPhones? OH THAT ISN'T WHAT HAPPENED?!?


William Gibson famously said the future is already here - it's just not evenly distributed. The paper suggests various ways we can get technology from Silicon Valley that will help us with this whole defense problem:


Generally the goal of this effort is to accomplish these three things, according to the paper:

  1. Secure Cyberspace as a Means to Advance Prosperity: First and foremost, US policy must ensure cyberspace and the Internet advance US and global prosperity, not least through continuous and accelerating innovation. Other priorities are important, but subordinate.
  2. Maintain an Open Internet to Support the Free Flow of Ideas
  3. Secure US National Security in and Through Cyberspace: (aka, spy)

Look at these policies in the language of "control" rather than "defense" and you'll see that a policy that "leans towards defense" is a thin cover for the desire, more nakedly espoused by the outgoing NSC, to control the entire vulnerability market, and that "maintaining an open internet" is a thin cover for trying to control other countries' internets and "prevent balkanization".

This is essentially an ideology of complete control. A defensible internet is a totalitarian playspace for big software and media companies that somehow ignores the fact that China wants to censor the Falun Gong out of existence.

Ok, how do we create one of these playspaces, according to the paper?


  1. Issue a New Strategy Prioritizing a Defense-Dominated Cyberspace
  2. Improve US Government Processes on Cyber
  3. Sow the Seeds for Disruptive Change
  4. Develop Grants to Extend Nonstate Capabilities
  5. Regulate for Transparency, Not Security
  6. Long-term Focus on Systemic Risk and Resilience
  7. Look Beyond a Security Mindset to Sustainability


Basically we're going to hope Silicon Valley drags our ass out of the fire if we give them more money to "innovate"?

So, as I'm often criticized for simply criticizing, here is my counter-plan:

If we do need a motto, then it needs to be: Acknowledge that Cyberspace is Different.


  1. Immediately deprecate protocols and products that are as under-water as a Miami Beach house: specifically IPv4, email, Microsoft Office, Microsoft Windows. This is the one thing we could do immediately that would drastically change our defensive posture. 
  2. Fire the head of any agency when a massive data breach impacts its operations, up to and including DIRNSA
  3. Revise the clearance system, which is old and brittle and not working well for anyone at this point other than Russia and China 
  4. Normalize and address the fact that foreigners' packets and data are identical to domestic packets and data. Nothing in our law and policy handles this at the moment. We clearly have to specifically revise Title 10 vs. Title 50 issues as opposed to monkey-patching them with "legal understandings".
  5. Dominate the information battlespace including in the Law Enforcement area by giving the NSA and CIA room to work (i.e. no more VEP that "leans towards defense" but is just for PR) and building a national mobile forensics center.


This plan is better specifically because it works by controlling ourselves, and not trying to extend our control to the entire software ecosystem and internet.

In other words, we cannot wait for Silicon Valley to come up with a way to secure Microsoft Windows and our old way of doing business: we need to accept a new way of doing business on ChromeBooks and iPhones and other hardened devices that cannot run Microsoft Office or be phished.

To make it a motto: My problem with Jason Healey's paper is that he proposes we wait for the future to secure us. But the future is now, if we want it.


-----

Ok, as a P.S.: This is the craziest idea in the paper. I mean, I like that he's thinking about metrics, but it's an example of a way of thinking that is as frozen in carbonite as Han Solo.

That doesn't mean there isn't work to do, but that work needs to be spent building an internet that is immune to the effects of botnets, not trying to combat the existence of botnets themselves.

Tuesday, January 10, 2017

How do you handle a bug class drop?


So one thing to ask yourself is whether your organization can handle the discovery (or public release) of an entirely new bug class. For example, when Format Strings became known, people adjusted their source code analysis tools, software development lifecycles including their COMPILERS, inventory systems, and entire understanding of classes of vulnerabilities. Not to mention all the offensive teams that need to jump on this sort of thing.
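To make the format string example concrete, here is the canonical shape of that bug class in C, as a minimal sketch with hypothetical function names. When the class became known, all the tools above had to learn to tell the first pattern from the second:

```c
#include <stdio.h>

/* Vulnerable pattern: attacker-controlled data is used AS the format
   string, so input containing "%x" leaks stack memory and "%n" writes
   to it. This is the entire bug class in one line. */
void log_bad(const char *user_input) {
    printf(user_input);
}

/* Fixed pattern: the format is a constant and the input is treated
   purely as data, percent signs and all. */
void log_good(const char *user_input) {
    printf("%s", user_input);
}
```

Compilers eventually grew warnings like GCC's -Wformat-security for exactly the first pattern, which is the kind of toolchain adjustment described above.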

We talk often about how private entities know more bug classes than you do. But few people have any level of preparation for when the next bug in libc comes out.

Sunday, January 8, 2017

The CSIS Paper Review - Part 1

So the CSIS paper shines when it gets a bit "salty", in the parlance of the times. In many ways the INTRODUCTION of the paper is its best part, which is rare.


"Turning to technologists didn't work" - I wonder if this was written by a lawyer! :)

This section is the best section in the whole paper, and worth a deeper look. Because all of these papers are the same, be they from MIT/Brookings/Stanford/CSIS or the team I'm working with. They all look at where we are, realize it was not a huge success (which frankly, several months ago was not consensus), and then try to determine a GRAND BARGAIN that can break the logjam we're in and move the needle.

Many of these groups think that moving the needle means "securing the whole internet", which is a conceptual trap they've fallen into. But every group seems to know that without dealing with everything holistically, you are getting nowhere. That means we have to actually come to agreement on everything from gnarly domestic issues, such as encryption, warrants, and liability, to international relations issues, such as what it means to go to war over the internet.

And there is, somewhere, a core of agreement between all of the policy groups positioning themselves on this issue.

The easiest way to judge these papers is to look at where they stand on a few clear issues that I've selected as tests:

  1. How do they prevent the next OPM?
  2. How do they prevent the next "electoral hacking"?
  3. What harsh truths do they admit, in particular, do they admit we are going to have strong crypto on phones one way or the other, and what are they going to do about it?
  4. How do we protect Jordan in cyberspace since we need them to project our power in meat-space?
  5. Do we have any answer whatsoever to ransomware?

For the first one, which is legitimate espionage on one hand, but something we need to defend ourselves against on the other, it's clear the answer is not in the thicket of "deterrence", which always drags every discussion towards "this is someone else's problem - maybe the military's, maybe the State Dept's, but not mine, for sure". 

The Federal CISO's Role


Federal policy types (stereotyping here to annoy Mara), as in that CSIS introduction, often see a CISO's or CIO's role as "manage the IT stuff to make it secure so I can run my business/administration". Nothing could be further from the truth. A CISO's role is to manage what your business is. They don't tell you what computing infrastructure you need to have a branch office in China securely; they tell you you can't have a secure branch office in China. 

And this is where the policy people with deep expertise in federal structure can really lend value in this process: Tell us the organizational innovations that can make it possible to manage the Information Security of the federal government in all its complexities. Where non-technologists go wrong is in trying to set policy in a space they cannot predict tomorrow in. And where technologists go wrong in these papers is in trying to suggest policy solutions that don't work in the current management miasma of the federal government. 

But we need both: A federal government that is unmanageable in the information security sense is unmanageable in any sense in the modern world. Eight years from now, one way or the other, the federal government will have a biometric record of every person in the States, or who has ever been in the States.  And if your Cybersecurity Agenda for the 45th President can't get us there, then it needs to be reworked.

Risks

Let's move on to what I consider some debatable propositions. I don't think many of these papers are really meant to be read for content so much as taken as a collection of resumes applying for influence, and a statement of worth, but reading them is still worth doing:

attack->attacks (I do all the proofreading for you). Also interesting how risks are mentally measured in dollars here; I'd caution that stealing the RIGHT billion dollars' worth of information can have a strategic effect far larger than its monetary value...

Ask yourself if any reasonably sized penetration testing team (NCC Group, for example) could have done the attacks against our electoral process that resulted in our recent Russian sanctions. Even the small players in this field do similar attacks EVERY DAY. And somehow policy teams continue to insist that the greatest risk is from attacks whose effect is equivalent to the use of physical force? Nothing could be further from the truth. This weird fetish for "equivalent to physical force" is an example of people who are not comfortable with the cyber domain. 

Is the internet just some machines routing packets? "They exist in physical space somewhere!" you can hear the Tallinn philosophers opine. Or is it a software layer where no particular request is routed to any particular storage center, as Microsoft would inform you if you asked them? 

But this is why, when lawyers, especially those with backgrounds in the law of war, try to project the future, they fail to see the risks right in front of them. The only actors capable of the most damaging attacks are nation-states? Yet 90% of Politico was Julian Assange this year, and as much as people try to make him out to be a Russian stooge, he's something even more annoying to our worldview - a non-nation-state actor.

Reread this document keeping in mind that non-nation-state actors already have the capabilities the authors assumed they would not have "for the next few years", and you'll come to different conclusions.

The Security Umbrella



misunderstand->misunderstandings
This is a two-pronged question, but deep down the hard part is WHO we have doing a more formal approach to building security and stability. We tend to focus on really big countries, like Brazil and India and China and Russia, but equally important are Jordan and Israel and Singapore and Argentina.

How do you extend your security umbrella to your allies? What does that even mean? These are hard questions and I try to read all these papers for ideas around that.

In some senses, this is similar to our domestic problem of sharing information with industry partners, but sharing information doesn't help you unless you also share actions, as we've learned.


Encryption



Every single working group on this subject wants to finally get over the encryption issue and has come out against backdoors or any legislative solution. Law Enforcement is going to have to deal. In the long run, we need to remove crypto from export control as well. I'm not making this as an argument here, only pointing out that every single working group producing these papers says the same thing in slightly different ways. They should probably be more explicit with what the FBI would do with more dollars, and include state and local police in the solution. But we've gone over that before.

This particular paper comes out against active defense as well, which is worth discussing later, but at least they have a position and section on it. :)

Conclusion

These sorts of papers represent a lot of work, and it's interesting that they don't get ripped up a bit more - possibly because I overthink them. But regardless, if you're one of the authors and you disagree feel free to ping me and I'll amend this in place or add a section on why I'm wrong. MAYBE MORE BEST PRACTICES FROM NIST IS EXACTLY WHAT WE NEED! :)





Saturday, January 7, 2017

"Zero Day===Totally Gnarly"

So RDanzig sent an email to someone I'm working with on a policy paper in which he corrected the term "Zero Day" to "Zero Day Exploit vs. Zero Day Vulnerability". This insistence on broken terminology is common among a certain set of policy people, and it's a bit laughable.

"Zero Day" does not have a technical meaning, despite any Rand papers to the contrary, and the honest truth is that it is synonymous in the technical community with "Totally Gnarly". In your head, replace "Zero Day" with "Totally Gnarly" when reading a paper by any of the policy teams and they'll make equal amounts of sense.

I want to, of course, focus on the recent CSIS paper, which we've all read by now. It has a broken section on "Zero Vulnerabilities", which at first I read as something like "Inbox Zero", but which turns out to just be their West Coast team not knowing that the term is "Zero Day" and then putting extremely dangerous policy ideas into their paper, seemingly without any internal peer review process.


A legally enforced code of conduct for all security researchers? Imagine the fun of trying to get that working when we can't even agree on basic principles around the subject in 40 years of trying. NIST, which had the NSA-backdoored random number generator debacle and lost all industry trust, is going to "gather best practices" on vulnerability handling? Is that really something we need? NO. GIANT WASTE OF TIME IS WHAT IT IS. The US Government can't even get CVE working properly without a brouhaha, and that's just about counting bugs, like the most basic biology lab on Earth.

Mandate publication of security assessments? I'm sure every vendor will sign right up for that, and that won't cause any problems. This whole thing was written by a bug bounty vendor who wants the contract for a federal bug bounty program. It has no ideas worth using, and what REALLY should worry you is that there are a lot of super smart people who worked on this CSIS report, and none of them read this section closely enough to even correct the title from "Zero Vulnerabilities" to "Zero Day Vulnerabilities", which is what I assume they meant.

There's some good stuff elsewhere in the report, but why didn't anyone even bother to read this section? How can we trust the other sections went through an internal peer review process?


Wednesday, January 4, 2017

Targeting Cyber Whales and Catching Cyber Minnows


President Obama has been criticized for being too weak in his response to Russia's interference in the US presidential election. But I would argue the opposite: his administration actually set a risky precedent which remains unexplored in the policy space.

What I want to point out here is that the White House miscalculated when it leveled sanctions against Russian private contractors, in addition to the GRU members responsible for the operation. Singling out Russia’s intelligence officials and state operatives for punishment of this nature is fine; it’s a limited move, and relatively ineffective, but it’s well within our rights and at least it sends a message. But private individuals should be off limits even when their technology and know-how is used in operations we do not like. If Trump’s administration plans to roll back any part of Obama’s sanctions, it should be those.


sanctions.PNG
"Technical Research and Development"? "Specialized Training"?


The question no one in the policy set seems to be asking is: do we really want our own private contractors singled out and targeted by foreign powers? Is that a "norm of behavior" that is in our best interests? How are cyber operation responses, which share a lot of similarities with criminal prosecutions, different? Nearly the entirety of the US information security industry has taught a class at /Training/Etc in Columbia, MD at one point or another. Our current sanctions action puts them all on the plate for Russian retribution. Not to mention our anti-virus industry is heavily populated with technical experts directly from APT1, now working to defend our systems. Strategic disruption of our adversaries means getting closer to, not further from, their teams of hackers. In many cases these contractors may have been working for the Russian government under duress. Can we judge their motivations along with their efforts?

Cyber Security Strategy is all about the Lemmas and Dilemmas

Regrettably, the US response to Russia's cyber operation faced serious dilemmas from the start. For instance, how do we achieve a deterrent effect on future efforts by Russia and other nations while preventing the confrontation from escalating into an actual "cyber war" or threatening our partnerships abroad, particularly in Syria? Additionally, how do we avoid exposing our sources and methods within the highest levels of Russia's government? We have attempted to solve these issues by relying on sanctions, which are an easy PR win - a NY Times headline series waiting to happen. But targeting sanctions or criminal prosecutions at small contractors, no matter what their involvement, is a long-term strategic mistake without appreciable benefit.  


This is an issue that needs to be considered very carefully, not only in terms of how it affects current operations, but also how it could limit our capabilities in the future. For instance, this precedent will make it extremely difficult to involve America’s private security community in “active defense” missions in the future, which is a key area of reform the next President should be reviewing.


Another question worth asking is, if private contractors are now fair game, could forensics firms such as CrowdStrike or Mandiant or other AV firms also be targeted for making “false allegations” about a specific country’s involvement? Also, is it possible the research community could be targeted for vulnerability discoveries which are later used by state-sponsored or criminal groups to carry out attacks?


These questions may seem far-fetched now, but we can’t underestimate the potential for an adversarial nation like Russia to use whatever means are available to make its point or redress grievances. Using US policy and precedent against us is a likely action by Russia. There's a reason you use nation-state policy efforts against nation-states instead of criminal law - otherwise you make all former TAO members responsible for TAO's mission, which is not well loved outside of the US.


The small companies, and the individuals running them, may well be deeply involved with the DNC hack and related operations, but deterrence efforts built on sanctions may require that we be able to make a public and convincing case regarding their guilt. Without that ability, they can easily deny their involvement, and our efforts look misguided at best. Of course, targeting individuals has the side effect of pissing them off personally, and small groups of individuals with grudges and high levels of capability are very hard for a nation-state to deter.

The Obama administration should be credited for its strong focus on cybersecurity issues during the last eight years. However, it has relied too heavily on the threat of broad-based sanctions for deterrence. This strategy worked well with China, but Russia is a different story and the Obama administration knows it - hence, the current sanctions are mostly about PR, not achieving a real strategic win. Going forward, the US needs to develop stronger and more diverse capabilities for response which will allow us to create real deterrence among all of our enemies, without resorting to counterproductive policies that are more PR than substance.


---
More Resources:

  • Jake talks a lot about this as well.
  • Alisa's postings in the community are well known, but here are some: Slideshare, Phrack
  • From an effectiveness and image perspective, releasing "indicators of compromise" is a fairly amateur thing to do. While it works for CrowdStrike and Mandiant and other commercial entities, the USG has better things it could do. In particular, these signatures were of rather low quality (see Robert Lee's report as well), which makes us look bad, not scary - the opposite of what we are trying to do.
  • Sanctions from a historical standpoint