Mike Dodds

What drew you to formal verification?
The thing that’s exciting to me about formal verification is that you can use it to understand precisely what a piece of software will or won’t do, without testing it. Software, in general, is regrettably unreliable, and formal verification can give you strong guarantees that your software really does what you want and that you’re not going to end up surprised by unwanted behavior.

Given that formal verification can provide such strong guarantees, something we want in software as a whole, why do you think it isn’t used more widely?
One reason is that until recently the cost of fully verifying a piece of software has been high. For example, INRIA’s CompCert project verified a whole compiler from C to machine code. That kind of effort is impressive, but it was very expensive, and you have to be sure that what you’re verifying is worth the cost. One of the things we’re doing at Galois is building verification tools that have more automation and are much, much more lightweight.

I think another reason that verification can be difficult is that developers often don’t know exactly what they want. Let me explain what I mean. When you verify something, you have a specification that says what properties you want from the software. You then bridge the gap to the software itself using automatic theorem proving, symbolic execution, or some other technique. However, just writing down what software should do is often a challenge. Let’s say you try to verify a web server. A web server does a lot of things, and some of those things are quite fuzzily specified. Fortunately, it’s now becoming much easier to apply verification techniques because we have better ways of writing down what complex software ought to do.
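To make the "specification plus a bridge to the code" idea concrete, here is a minimal sketch using the Z3 SMT solver’s Python bindings (installable as z3-solver). The function and its specification are invented for illustration; they are not from the interview or from any Galois tool.

```python
# A toy "implementation" and "specification", checked with an SMT solver.
from z3 import Int, If, And, Or, Not, Solver, unsat

x, y = Int("x"), Int("y")

# "Implementation": a branching expression standing in for real code.
result = If(x > y, x, y)

# "Specification": what we want to be true of the result for all inputs.
spec = And(result >= x, result >= y, Or(result == x, result == y))

# Ask the solver for a counterexample to the spec. An unsat answer means no
# counterexample exists, i.e. the implementation meets the spec for every input.
s = Solver()
s.add(Not(spec))
print("verified" if s.check() == unsat else s.model())
```

The point of the sketch is the division of labor: the specification says what you want, and an automated tool does the exhaustive reasoning that testing cannot.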

Security, particularly cryptography, is a good domain for verification because the cryptographic behavior you want is often written down very clearly, for example in an RFC. Verification can tie crypto software to its specification in a very precise way, giving you software with verifiable security guarantees. This is what our SecDev tutorial will be about, in fact.
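In that spirit, here is a small equivalence-checking sketch: a hand-optimized bitwise rotation (the kind of primitive RFC-specified ciphers rely on) is checked against a reference definition for all 2^32 inputs at once. This again uses Z3 directly and is only an illustration, not a description of Galois’s tooling.

```python
# Check that an "optimized" rotation matches the reference specification.
from z3 import BitVec, RotateLeft, LShR, Solver, unsat

x = BitVec("x", 32)

reference = RotateLeft(x, 7)             # the "specification"
optimized = (x << 7) | LShR(x, 32 - 7)   # the "implementation"

s = Solver()
s.add(reference != optimized)            # search for any mismatching input
print("equivalent" if s.check() == unsat else s.model())
```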

As a researcher in concurrency, could you elaborate on how concurrency adds new complexity to verifying systems?
When you verify a single-threaded program, what you often do is prove the properties that are true after each step in the program. In concurrency, because you’re running many different threads that can affect one another, there’s a lot more nondeterminism, so that’s much harder. When one of your threads takes a step, all of your other threads could have taken one, two, or many steps. The space of possibilities is just much, much larger.
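A toy illustration of that blow-up (mine, not from the interview): enumerate every interleaving of two threads that each do a non-atomic "read the counter, then write back read + 1" on a shared counter, and collect the final values those interleavings can produce.

```python
# Enumerate all interleavings of two 2-step threads and their outcomes.
from itertools import permutations

# Each thread is two steps: ("read", tid) then ("write", tid).
steps = [("read", 0), ("write", 0), ("read", 1), ("write", 1)]

def interleavings(steps):
    """All orderings of the steps that preserve each thread's own order."""
    valid = set()
    for perm in permutations(steps):
        thread0 = [s for s in perm if s[1] == 0]
        thread1 = [s for s in perm if s[1] == 1]
        if thread0 == steps[:2] and thread1 == steps[2:]:
            valid.add(perm)
    return valid

outcomes = set()
for schedule in interleavings(steps):
    counter, local = 0, {}
    for op, tid in schedule:
        if op == "read":
            local[tid] = counter
        else:
            counter = local[tid] + 1
    outcomes.add(counter)

print(len(interleavings(steps)), "interleavings, possible results:", outcomes)
# 6 interleavings; the result can be 2 (correct) or 1 (a lost update).
```

With just two threads of two steps each there are already six schedules and two different outcomes; real programs with many threads and many steps make this space astronomically larger, which is exactly what a concurrency verifier has to reason about.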

How has your transition from academia to working at Galois changed the way you see communication between academia and industry?
I was an assistant professor in England up until 2017, when I moved to Galois. My interests have always been at the interface between theoretical verification ideas and building tools. Part of the reason I wanted to come to Galois was that I wanted not just to build prototype tools, but also to transition them into more industrial contexts. Galois is a research consultancy, which means that the Galois team is very academic by background, but we’re mainly interested in applying those techniques to solve practical problems. For me, a big reason I wanted to move was that I would get to help people and improve reliability and security in the real world. That’s not to say that academia is a bad thing. There’s an ecosystem where academics work on fundamental ideas. Pure research is the basis of everything we do at Galois, and it’s really important that there are academics out there exploring those fundamental ideas. It’s also important that there are people like us who turn those fundamental ideas into practical prototypes motivated by problems in industry. I enjoyed being an academic and I enjoy being at Galois. Both are valuable parts of the ecosystem.

What is an area of research that if you had more time, you’d want to dig more deeply into?
This is obvious, I guess, but machine learning is very exciting right now. We’ve been doing a little bit of work on applying machine learning techniques to verification problems. The classic approach to verification is that you build a solver based on a set of hand-written rules, and the rules prove things about the program. However, some of my colleagues had a paper in PLDI this year where they basically said, “Suppose we take one of these solvers and plug it into a machine learning system.” If you do that, it turns out the rule-based solver becomes radically more effective. It’s exciting that you can use machine learning to perform reasoning that would otherwise require human judgment, or to succeed where your verification would normally fail because it isn’t smart enough to converge on an answer. We’re currently building another prototype tool for verifying distributed systems, which will hopefully be ready by the end of the year.

Hopefully, if things are easier to verify, verification will become more usable and more widespread.
Ease of use and producing effective feedback are very important in making verification practical. Something I’m excited about is integrating verification and static analysis tools into developer workflows. People have started to think much more about this over the last five years. Previously, there was a tendency to build a static analysis tool, analyze some open source code, get some bugs, and just email the developers. The problem is they usually know about the bugs, or the bugs are not realizable in practice. So that’s the naïve way of approaching the problem. More recently, people have instead tried to find bugs at the point they’ve just been introduced to the code base. Something that Galois has done (and Amazon, Google, Uber, Facebook, and others) is that every time you make a commit, you run the static analysis and it points out the new bugs. This is smart because it fits directly into developer ergonomics: a commit is the point in the workflow where, if you draw attention to a bug, it’ll get fixed. This is exciting to me because it isn’t a change in technology; we’re getting better at understanding how to integrate tools into the developer workflow to help people make more reliable software.
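As a minimal sketch of the "analyze at commit time" idea, the script below runs a static analysis tool only over the files touched by the latest commit, so findings are about the code the developer just wrote. The choice of tool (pyflakes) and the single-commit diff are illustrative assumptions, not a description of Galois’s or anyone else’s pipeline.

```python
# Run static analysis only on the files changed by the most recent commit.
import subprocess
import sys

# Ask git which files the latest commit touched.
changed = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.split()

py_files = [f for f in changed if f.endswith(".py")]
if py_files:
    # Report findings for just these files; a nonzero exit fails the check.
    sys.exit(subprocess.run(["pyflakes", *py_files]).returncode)
```

Hooked into a commit check or CI job, this keeps the feedback scoped to what just changed, which is the ergonomic point being made above.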
