Clark Wood

Very briefly, what was the DARPA Cyber Grand Challenge?
The Cyber Grand Challenge was a DARPA challenge where teams competed to find bugs in software. Humans inevitably introduce bugs to software and, because of that, software is very expensive and difficult to write. This creates a situation where the first to market doesn't necessarily write the best software. So, we end up with buggy code, but that's how you win in a marketplace: you cut corners to be first and get market share, hoping customers will get locked in and stay with you because of sunk costs.

Bugs result in two things. First, pure accidents: a rocket crashes, for example, because of an integer overflow. Second, and more insidiously, they allow for what we think of as hacking nowadays. Hackers find bugs and turn them into exploits. Basically, the claim of the Cyber Grand Challenge is that bugs exist in software and they're very hard to find, so instead let's train machines to find the bugs for us. It would be nice if we could constantly run a program on AWS that finds bugs for us. The goal of the Cyber Grand Challenge was to see if they could build technologies that leverage program analysis expertise to find bugs in software and automatically patch them. There are several open research problems and several engineering problems that make this very difficult.
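To make the first kind of failure concrete, here is a minimal, hypothetical C sketch of an integer overflow of the sort mentioned above. It is not the actual rocket code, just an illustration of how a small arithmetic bug can silently produce a wildly wrong value, which is exactly the kind of thing automated bug-finding systems aim to flag.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* A 16-bit signed counter near its maximum value (INT16_MAX is 32767). */
    int16_t velocity = 32000;
    int16_t delta    = 1000;

    /* The intended result is 33000, but that does not fit in 16 bits.
     * On typical platforms the stored value wraps around to a large
     * negative number instead: a silent failure with no crash and no
     * warning at this point in the program. */
    int16_t updated = (int16_t)(velocity + delta);

    printf("expected 33000, got %d\n", updated);
    return 0;
}
```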

What was the key innovation behind the DECREE program?
We've had informal Capture the Flag competitions in computer security for a while now, the pinnacle of which is DEFCON CTF. The way those competitions worked in the past was very artisanal. People would have their own home-brewed tools, but for the most part someone who already knew how to do it would carefully, painstakingly craft a problem in a CTF. You would then figure out what the problem was and manually write up exploits. Everything was craft-like. The first advantage of DECREE was to give everyone a common base from a reproducibility point of view. We wanted to make things more repeatable and easier to compare. DECREE acts as a platform from which you can produce further experiments. The other advantage was that, because DECREE is different from the Linux or Windows environments, exploits written for it don't work in the real world. There was less cause for concern because the output of these challenges wasn't something that could be thrown at real systems.

What drew you to pursue a degree in policy after working as a researcher on the technical side?
Working on the technology side of things is deeply rewarding because there's a real feeling of accomplishment when you first "get it": when your code finally compiles and runs correctly, or you first manage to crash a binary. But, even though it's very satisfying, it's also often divorced from benefitting the world at large. Policy is squishy; it lacks the rigor of science and the elegance of math, but you have real-world impact that's really difficult to duplicate. For me, transitioning to policy is an attempt to have more immediate impact, particularly because science and technology are hard and especially hard to advocate for. A lot of times, despite the government's best intentions, it makes the wrong decisions when it comes to science and technology, and I would like to make sure that happens less.

What within policy are you currently focusing on?
My current research is on policy opportunities for formal methods. Formal methods can guarantee that a given computer program possesses a specific property. An example property might be: “this program does not contain a certain type of bug”. So instead of having software with n bugs, where you don’t know how large n is, you can have strong assurances that n is actually 0. Formal methods, in some form, have been around for a long time; type checkers are an example, but they have really advanced over the decades. Nowadays we have entire microkernels with formally proven memory integrity, meaning that, for example, there are no kernel buffer overflows. It’s not that we’re taking a calculated risk. We know specific behavior is not going to happen. There will always be things like side channels or phishing but it’s a significantly better world than the one we’re in now. We’re starting to see leaders in industry use these techniques now in specific domains. The goal of my current research is to show that it will soon be reasonable for the government to incentivize the use of formally verified software or even punish companies for not using it when it comes to certain devices.
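As a rough illustration of what "proving a property" looks like in practice, here is a small, hypothetical C function annotated with a machine-checkable contract. The annotation syntax follows ACSL, the specification language used by the Frama-C verifier; the function and its contract are my own example, not something from the interview. A prover can use the contract to show that, for every input satisfying the precondition, the loop never reads out of bounds and the result stays within the promised range, which is the difference between "we tested it" and knowing that n really is 0 for that class of bug.

```c
/*@ requires n >= 0;
  @ requires \valid_read(a + (0 .. n-1));
  @ assigns \nothing;
  @ ensures 0 <= \result <= n;
  @*/
int count_nonzero(const int *a, int n) {
    int count = 0;
    /*@ loop invariant 0 <= i <= n;
      @ loop invariant 0 <= count <= i;
      @ loop assigns i, count;
      @ loop variant n - i;
      @*/
    for (int i = 0; i < n; i++) {
        /* The loop invariant plus the precondition guarantee a[i] is in bounds. */
        if (a[i] != 0) {
            count++;
        }
    }
    return count;
}
```

The annotations live in comments, so the file still compiles as ordinary C; the point is that a tool can check the contract exhaustively over all inputs rather than relying on tests.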

What are the communication barriers in conveying next-generation research to our government?
Decisionmakers in government only have so much time, and probably don’t have the expertise necessary to understand some of the questions we’re facing. Scientists could also do a better job of engaging policymakers and people more broadly, but I think that sometimes scientists get blamed for being poor communicators, when really the problem is that decisionmakers either can’t or won’t understand. So both sides could do better, and I don’t want to fall into a false equivalency, but I don’t know whether one is “more to blame” than the other.

One of the coolest efforts I’ve seen is scientific outreach: scientists teaching children and getting people to learn more about these fields. Science and government are in many ways siloed. Getting constituents more involved and knowledgeable will trickle up to policymakers.

Another issue is that scientists who are acting in good faith oftentimes disagree. When I talk to my coworkers about formal verification, one of the things I've heard them say is that "there will always be bugs". That's absolutely true, but my goal isn't to rid the world of bugs; it's to incrementally improve the state of software. Scientists are like anybody else and compete for the same small pots of money, prestige, etc. If you look at it as a marketplace of ideas, then this competition is good in many ways because, in theory, the best ideas rise to the top. But scientists are people as well, and you can run into petty squabbles. Neither of those things is easy to fix. And you have to think of things in terms of alternatives. Just because the government is siloed and siloing has problems doesn't mean that there's a better viable alternative. Maybe if you had even more collaboration, nothing would get done. But it does feel sub-optimal.

How have your current studies in the policy sphere informed the way you now build technology?
The major thing that has changed over the years for me is my appreciation for usability. Especially in more academic environments, people focus on novelty: developing new algorithms or new software for finding bugs. They'll spend lots of time doing a great job on this, but all the amazing work they did goes to waste because they didn't talk to users, verify that what they did was necessary, or make it accessible. This is broader than policy, but it's a big problem. Even when people come together and agree it's important, they still argue that it's not their job. You see this in government as well. Something will get built, and either it was the wrong thing to build, or it wasn't made available to people, or it was made available but nobody listened to what users need for round two. Separation of labor is important, but people will try their best to avoid "unsexy" work, especially if it's not only unsexy but also unsung and unappreciated. Somebody needs to write the GUI and to talk to the users who don't understand or care how elegant your algorithm is, but who are going to use what you built.
