Companies that make high-assurance software - programs whose failure means catastrophic consequences, like a billion dollars disappearing or a rocket blowing up on the launch pad - are adopting technologies that are a couple of years ahead of the mainstream. When you ask a Trail of Bits engineer what’s happening, you’re talking to someone who is already operating in the future. In this bonus episode, Trail of Bits engineers discuss trends they are seeing now that the rest of the industry will see in the next 18 to 24 months.
Dan Guido is the CEO of Trail of Bits, a cybersecurity firm he founded in 2012 to address software security challenges with cutting-edge research. In his tenure leading Trail of Bits, Dan has grown the team to 80 engineers, led the team to compete in the DARPA Cyber Grand Challenge, built an industry-leading blockchain security practice, and refined open-source tools for the endpoint security market. In addition to his work at Trail of Bits, he’s active on the boards of four early-stage technology companies. Dan contributes to cybersecurity policy papers from RAND, CNAS, and Harvard. He runs Empire Hacking, a 1,500-member meetup group focused on NYC-area cybersecurity professionals. His latest hobby coding project -- AlgoVPN -- is the Internet's most recommended self-hosted VPN. In prior roles, Dan taught a capstone course on software exploitation at NYU as a faculty member and the Hacker in Residence, consulted at iSEC Partners (now NCC Group), and worked as an incident responder for the Federal Reserve System.
Nat Chin is a Security Engineer II at Trail of Bits, where she performs security reviews of blockchain projects and develops tools for working with Ethereum. She is the author of solc-select, a tool that helps switch between Solidity compiler versions. She worked as a smart contract developer and taught as a blockchain professor at George Brown College before transitioning to blockchain security when she joined Trail of Bits.
Opal Wright is a cryptography analyst at Trail of Bits. Two of the following three statements about her are true: (a) she's a long-distance unicyclist; (b) she invented a public-key cryptosystem; (c) she designed and built an award-winning sex toy.
Jim Miller is the cryptography team lead at Trail of Bits. Before joining Trail of Bits, Jim attended graduate programs at both Cambridge and Yale, where he studied and researched both number theory and cryptography, focusing on topics such as lattice-based cryptography and zero-knowledge proofs. During his time at Trail of Bits, Jim has led several security reviews across a wide variety of cryptographic applications and has helped lead the development of multiple projects, such as ZKDocs and PrivacyRaven.
Josselin Feist is a principal security engineer at Trail of Bits, where he participates in assessments of blockchain software and designs automated bug-finding tools for smart contracts. He holds a Ph.D. in static analysis and symbolic execution and regularly speaks at both academic and industry conferences. He is the author of various security tools, including Slither, a static analysis framework for Ethereum smart contracts, and Tealer, a static analyzer for Algorand contracts.
Peter Goodman is a Staff Engineer in the Research and Engineering practice at Trail of Bits, where he leads all de/compilation efforts. He is the creator of various static and dynamic program analysis tools, ranging from the Remill library for lifting machine code into LLVM bitcode, to the GRR snapshot/record/replay-based fuzzer. When Peter isn't writing code, he's mentoring a fleet of interns to push the envelope. Peter holds a Master's in Computer Science from the University of Toronto.
An accomplished information and physical security professional, Nick leads the Software Assurance practice at Trail of Bits, giving customers at some of the world's most targeted companies a comprehensive understanding of their security landscape. He is the creator of the Trail of Bits podcast, and does everything from writing scripts to conducting interviews to audio engineering to Foley (e.g. biting into pickles). Prior to Trail of Bits, Nick was Director of Cyber Intelligence and Investigations at the NYPD; the CSO of a blockchain startup; and VP of Operations at an industry analysis firm.
Story Editor: Chris Julin
Associate Editor: Emily Haavik
Executive Producer: Nick Selby
Executive Producer: Dan Guido
Rocky Hill Studios, Ghent, New York. Nick Selby, Engineer
Preuss-Projekt Tonstudio, Salzburg, Austria. Christian Höll, Engineer
Whistler, BC, Canada; (Nick Selby) Queens, NY; Brooklyn, NY; Rochester, NY (Emily Haavik);
Toronto, ON, Canada. TAPES//TYPES, Russell W. Gragg, Engineer
Trail of Bits supports and adheres to the Tape Syncers United Fair Rates Card
Edited by Emily Haavik and Chris Julin
Mastered by Chris Julin
You can watch a video of this episode.
DISPATCHES FROM TECHNOLOGY'S FUTURE, THE TRAIL OF BITS THEME, Chris Julin
OPEN WINGS, Liron Meyuhas
NEW WORLD, Ian Post
FUNKYMANIA, Omri Smadar, The Original Orchestra
GOOD AS GONE, INSTRUMENTAL VERSION, Bunker Buster
ALL IN YOUR STRIDE, Abe
BREATHE EASY, Omri Smadar
TREEHOUSE, Lingerwell
LIKE THAT, Tobias Bergson
SCAPES, Gray North
With the exception of any Copyrighted music herein, Trail of Bits Season 1 Episode 0; Immutable © 2022 by Trail of Bits is licensed under Attribution-NonCommercial-NoDerivatives 4.0 International. This license allows reuse: reusers may copy and distribute the material in any medium or format in unadapted form and for noncommercial purposes only (noncommercial means not primarily intended for or directed towards commercial advantage or monetary compensation), provided that reusers give credit to Trail of Bits as the creator. No derivatives or adaptations of this work are permitted. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
Chris Julin has spent years telling audio stories and helping other people tell theirs. These days he works as a story editor and producer for news outlets like APM Reports, West Virginia Public Broadcasting, and Marketplace. He has also taught and mentored hundreds of young journalists as a professor. For the Trail of Bits podcast, he serves as story and music editor, sound designer, and mixing and mastering engineer.
For the past 10 years Emily Haavik has worked as a broadcast journalist in radio, television, and digital media. She’s spent time writing, reporting, covering courts, producing investigative podcasts, and serving as an editorial manager. She now works as an audio producer for several production shops including Us & Them from West Virginia Public Broadcasting and PRX, and APM Reports. For the Trail of Bits podcast, she helps with scripting, interviews, story concepts, and audio production.
NARRATOR (NICK SELBY): A long time ago, Penn Jillette - the loud half of Penn & Teller - said something like, “If you could predict the future, to a degree even the slightest percentage over chance, you’d be working on Wall Street; you’re not gonna be bending spoons on television.”
His point was that seeing into the future, for real, gives you practical intelligence on what’s coming - so you can take action to prepare for it.
Companies that make high-assurance software - programs whose failure means catastrophic consequences, like a billion dollars disappearing or a rocket blowing up on the launch pad - are adopting technologies that are a couple of years ahead of the mainstream. When you ask a Trail of Bits engineer what’s happening, you’re talking to someone who is already operating in the future.
So when they tell you about their day, it’s a pretty safe bet that what they’re saying is something you’re going to read about one day in the future.
NAT CHIN These platforms that allow you to take out flash loans have essentially allowed attackers to call a smart contract and get their hands on hundreds of millions of dollars in funds for 10 seconds.
NARRATOR: That’s NAT CHIN
NAT CHIN - I'm a Security Engineer II at Trail of Bits.
NAT CHIN - That means that an attacker suddenly has an insane amount of funds that they can use on a system. And if it's exploitable, then they'll use that money to essentially sweep the system…and steal everything.
NARRATOR: She recorded that comment about a week before what she described happened to a DeFi exchange, exactly as she said it would.
In the spring of 2022 we asked several Trail of Bits security engineers what they’re concerned about - and what they’re expecting - in the next 18 to 24 months, based on what they see right now.
Their answers were varied, but we heard a few common themes as we spoke with engineers across our three main focus areas - Cryptography, Blockchain, and Application Security.
One thing everyone mentioned was how the industry looks at memory-safe languages - and their potential to make developers feel a little TOO safe.
OPAL WRIGHT - People say, oh, we wrote it in Rust, and therefore, Rust is memory safe, so there can't be any bugs in it. Well, that's bulls**t…
NARRATOR: OPAL WRIGHT is a cryptography analyst at Trail of Bits.
Here's memory safety in simple terms.
If a language isn't memory-safe, it's easier for an attacker to manipulate a program's memory as a backdoor into a system - to crash it, or worse.
A memory-safe language is designed to close that loophole. There are a number of memory-safe languages - Rust, Haskell, Go, and a bunch of others - and using one reduces the chance of this kind of attack.
OPAL WRIGHT - …It prevents certain types of bugs from happening.
NARRATOR: But Opal worries — many of our engineers worry — that developers have gotten a false sense of security from memory-safe languages.
OPAL WRIGHT - Logic bugs happen.
NARRATOR: Here’s Opal again:
OPAL WRIGHT: If you're not checking for certain conditions with the keys that you're getting, if you're not checking for, you know, the point at infinity, when you're verifying a digital signature, guess what, you're still going to fail. The fact that you wrote it in Rust doesn't mean you know what you're doing.
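To make Opal's point concrete, here is a minimal sketch in Rust; the function names and the scenario are hypothetical and simplified. The compiler rejects the classic memory bug outright, but it happily accepts the logic bug, because memory safety says nothing about whether you checked the right condition.

```rust
// A minimal, hypothetical sketch: the memory bug (shown in the comment)
// is rejected at compile time; the logic bug compiles and runs fine.

// The C-style memory bug -- returning a reference to a value the function
// owns -- simply does not compile in Rust:
//
//     fn dangling() -> &String {
//         let s = String::from("temporary");
//         &s // rejected: reference to a local that is about to be dropped
//     }

// The logic bug: the developer meant to test `signature_valid`, but tests
// the wrong flag. Memory safety has nothing to say about this mistake.
fn is_authorized(signature_present: bool, _signature_valid: bool) -> bool {
    signature_present
}

fn main() {
    // A forged request: a signature is attached, but it does not verify.
    if is_authorized(true, false) {
        println!("access granted to a forged request"); // this prints
    }
}
```

The first kind of bug is what memory-safe languages eliminate; the second kind is what Opal means by logic bugs, and no compiler will catch it for you.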
NARRATOR: Shake a software executive or developer awake in the night and demand to know about their testing regime, and they will regale you with stories of their testing: unit testing, static testing, even dynamic testing.
Building code that's "safe" requires you to know what "safe" even means - and the future of developer education is a real departure from its current form.
Here’s Trail of Bits CEO DAN GUIDO:
DAN GUIDO - The future is about the business saying, "This is what we expect the program to do - tell me if it ever does anything different." Anything that varies from what is desired? That’s a bug.
NARRATOR: The future of testing isn't about the tools you use; it's about what you test and why. It's verification-driven testing.
DAN GUIDO - When we talk about ‘developer security training’ today, we’re trying to teach developers how to avoid writing code that’s susceptible to a never-ending and always growing list of attacks. That’s impossible. The best teams go the other direction: they tell their developers to write code and list out the specs, invariants, properties: what specifically should this code do? Then we let tools find any places in the code that allow unintended things - bugs - to happen. That's the future we're in.
NAT CHIN - What does your code depend on?
NARRATOR: Here's NAT CHIN again.
NAT CHIN - How do you expect a user to interact with your system?
NARRATOR: Trail of Bits has leaned into building application invariants into testing; this used to be hard, and now it's required. We've found that this approach works really well on cloud-native software, because it's built on service-oriented architectures; much of it runs on Kubernetes, and much of it has reproducible builds.
DAN GUIDO - This has always been the case for cryptography code, but now we are able to do it on cloud-native software, smart contracts, Layer 1 blockchains. This allows us to refocus on what is important to the business in the code.
NAT CHIN - Is it possible for a user to bypass the way in which you intended the system to be used? What are the expected inputs into a smart contract, what are the expected behaviors of the smart contract and what is the expected output?
NARRATOR: This concept - gathering expected inputs and outputs - is what people who care about safety do, and what everyone will be doing soon. This is different from "testing," or the classic advice to "understand the condition of your assets." It's "understand your invariants": does your code allow things you don't intend it to?
JOSSELIN FEIST - For example…
NARRATOR: Here’s Blockchain Team Lead JOSSELIN FEIST:
JOSSELIN FEIST - …if you have a token, an invariant could be that no user in the system should have more tokens than the system can supply.
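As an illustration of what such an invariant looks like when it's written down as code, here is a minimal sketch in Rust. The `Ledger` type, its `transfer` method, and every name here are hypothetical stand-ins for a real token contract; an actual review would express the property in the contract's own language and tooling. A fuzzing harness that exercises it appears after Nat's advice below.

```rust
use std::collections::HashMap;

// A toy token ledger (hypothetical), just enough to state the invariant.
struct Ledger {
    total_supply: u64,
    balances: HashMap<String, u64>,
}

impl Ledger {
    fn new(total_supply: u64, owner: &str) -> Self {
        let mut balances = HashMap::new();
        balances.insert(owner.to_string(), total_supply);
        Ledger { total_supply, balances }
    }

    fn transfer(&mut self, from: &str, to: &str, amount: u64) -> bool {
        let from_balance = *self.balances.get(from).unwrap_or(&0);
        if from_balance < amount {
            return false; // insufficient funds: reject the transfer
        }
        self.balances.insert(from.to_string(), from_balance - amount);
        *self.balances.entry(to.to_string()).or_insert(0) += amount;
        true
    }

    // Josselin's invariant, stated as an executable property: no holder
    // ever ends up with more tokens than the system can supply.
    fn invariant_no_balance_exceeds_supply(&self) -> bool {
        self.balances.values().all(|&b| b <= self.total_supply)
    }
}
```

Once the property exists as code, a tool can check it automatically after every operation instead of a human eyeballing the logic.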
NARRATOR: We've all seen the consequences of failing to answer those questions - we see it with every major hack of a cryptocurrency exchange or smart contract, and those happen with increasing regularity. The flash loan attack we mentioned earlier - the theft of $180 million in Beanstalk Farms tokens - was a consequence of exactly this failure.
Nat's advice? Follow the old Soviet legal model, which basically said, "That which is not permitted is forbidden." Verify that the things you want to happen will always happen, and that the things you don't want to happen never do:
NAT CHIN - You need to know what your code does, and you need to have something - in the case of DeFi, preferably a mathematical formula - that you can compare against, so that you can actually check that the code behaves as you expect.
NARRATOR: Breaking programs down into smaller chunks lets you test them more precisely. That’s the only way that seemingly impossible verification problems get slightly less impossible. Nat has some basic advice, informed by scores of audits:
NAT CHIN - Start small. Once you have an idea of what you want to build, build a minimal product and feature set of that code base, and stick to your scope. Apply unit testing and fuzzing to make sure that code behaves as expected, then audit, and slowly, iteratively add new features to that code base.
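Continuing the hypothetical `Ledger` sketch from above, here is what "applying fuzzing to make sure the code behaves as expected" can look like in its simplest form: drive the code with pseudo-random operations and check the invariant after every step. A real setup would use a coverage-guided fuzzer or a property-testing framework; the tiny xorshift generator below just keeps the sketch dependency-free.

```rust
fn main() {
    let users = ["alice", "bob", "carol"];
    let mut ledger = Ledger::new(1_000_000, "alice");
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15; // arbitrary seed

    for step in 0..10_000 {
        // xorshift64: a stand-in for a real fuzzer's input generation
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;

        let from = users[(state % 3) as usize];
        let to = users[((state >> 8) % 3) as usize];
        let amount = state >> 40; // keep transfer amounts smallish

        ledger.transfer(from, to, amount);

        // The property from the previous sketch, checked after every operation.
        assert!(
            ledger.invariant_no_balance_exceeds_supply(),
            "invariant violated after step {step}"
        );
    }
    println!("invariant held for 10,000 random transfers");
}
```

If `transfer` had a bug that minted or duplicated tokens, the assertion would fail on the step that introduced it, pointing straight at the violating sequence of operations.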
NARRATOR: Here’s Trail of Bits CEO DAN GUIDO again:
DAN GUIDO - It's the same thing with "missing or incomplete test regimes" -- people can have great tests, but we're in a world now where security testing means something a lot more formal than it used to, where instead of relying on system-level invariants like "does it crash" (memory unsafety), you need to test for application-level invariants (logic flaws).
NARRATOR: The clear message from the future: consider how you assemble the software you create, with an eye to writing code - and its supporting documentation - that is more easily testable, so that you can use the right tests for the job.
We're living through a cryptocurrency gold rush. Like every gold rush in history, there are certain givens, and one of those is that people move fast to get in on the action. One impact our cryptography team has noted is a shrinking of the analysis window - that is, an increase in speed to market driven not by new technology or better practices, but by greed and worse practices. Simply put, there's a reason it takes so much time to approve cryptographic standards: getting cryptography right … is hard.
OPAL WRIGHT - Historically, the time between publishing some sort of new cryptographic tool and seeing it deployed in a security critical system has been a lot bigger than it is right now.
NARRATOR: The trend we see here comes in two parts. First, investors and CEOs working on systems that leverage cryptography keep trying to stretch the definition of "what is safe" into "it delivers the features we want, so it's good." Again and again, they push those boundaries until there's a catastrophic failure.
JIM MILLER: So the problem is that CEOs and other decision makers seem to consider only the upside that newer cryptography can deliver.
NARRATOR: Trail of Bits Cryptography lead, JIM MILLER.
JIM MILLER: But they often don't account for the inherent risk associated with using these newer, less battle-tested protocols -- this really applies to new technology generally, but the costs of getting crypto wrong can be far higher.
NARRATOR: Second, because of the dynamics of the blockchain world, when the catastrophic failure does come, it's often dismissed by those left standing: the company that failed did it wrong, but that won't happen to us. It's the nightly news: bad things happening to other people.
When NIST set out to standardize the Advanced Encryption Standard (AES), it announced the candidate algorithms in August of 1998, but it didn't pick a winner until October of 2000, and it didn't publish a standard until November of 2001.
OPAL WRIGHT - That's a three-year gap, during which there was a lot of analysis, a lot of effort put into analyzing the candidate algorithms and then eventually just focusing on Rijndael, which was the winning algorithm. And even with all of that, four years later, Dan Bernstein published an attack on what was the standard implementation at the time.
NARRATOR: The attack shook the industry, which scrambled to find ways to avoid this kind of thing in the future. Opal notes that this attack is one of the reasons that, to this day, a lot of processors include dedicated AES instructions.
OPAL WRIGHT - There's a similar thing going on right now with the post quantum standardization effort, that NIST is running. The proposals were published in December of 2017. As we're recording this, it's April 2022 and it's still going. But just last month, four years into the process, somebody published an attack against one of the finalist algorithms, and it absolutely demolishes the security.
NARRATOR: So a larger window - a window that is open for a longer period of time - is a responsible and good thing to have. It takes time for cryptographers to get their head around the math, around the specifics of the implementations. This isn't something you can just throw together. And yet...
OPAL WRIGHT - Unfortunately, blockchain is really shrinking that window. Things like threshold signature schemes, some of the zero knowledge proof schemes, they get published in one issue of a journal and before the ink is dry on the next issue of the journal, the software has been written to implement what was originally published and it's out the door and some cryptocurrency startup is using it to protect hundreds of millions of dollars.
OPAL WRIGHT - But I think the answer is to be conservative … to wait for analysis to happen to give that larger window … Give it a little time …wait for the dust to settle on the analysis.
NARRATOR: And these tools, like threshold signature schemes, are so complicated to develop that lots of people find it easier to use an existing library rather than building their own. That can be smart if the library comes from a good team that knows what it's doing, but it also widens the blast radius of any bug in that library - because more people are using it.
Opal's counsel for those in the blockchain business - and it's what the best companies do - is simple: do good math.
OPAL WRIGHT - I think at some point in the next couple of years, we're very likely to see somebody making a billion dollar bet on a piece of math that turns out to suck. If you're doing cryptocurrency, if you're doing ZK rollups, if you’re doing these side chains, if you're doing zero knowledge systems, if you're doing multi party computation, if you're doing these threshold signature schemes, you need to have experts on your team who actually know what they're doing.
NARRATOR: Because even though these tools aren't new, and they’ve been the topic of extensive academic study for years, people are now translating them for the first time into software. Translation mistakes are popping up all over the place.
JIM MILLER - There's quite a lot of effort put into writing and peer reviewing these papers,
NARRATOR: Here’s Trail of Bits Lead Cryptographer JIM MILLER:
JIM MILLER - and, although mistakes do happen from time to time, we can be reasonably confident in their correctness. But this peer review is largely from a purely theoretical perspective, making sure things like cryptographic security proofs are correct; they focus less on things like missing implementation details and misleading notation. They're not implementation guides, they are theoretical protocols. Translating theoretical protocols to software is hard.
NARRATOR: And we need good math now more than ever - because applications are getting more complex.
And since we're talking about the rush to adopt software quickly -- with narrower and narrower windows for testing -- let's turn to our Blockchain team. All those conditions are business-as-usual in the blockchain industry.
JOSSELIN FEIST - My name is JOSSELIN FEIST, and I'm a principal security engineer at Trail of Bits. We're seeing two categories of issues in smart contracts. The first is common flaws - things like re-entrancy or unprotected upgrades.
NARRATOR: The fact that re-entrancy is still making its way into smart contracts is a problem similar to what Wendy Nather has called The Security Poverty Line, and what Trail of Bits CEO DAN GUIDO calls, “Security Haves and Have Nots.”
DAN GUIDO - We're seeing a bimodal distribution of people in the blockchain space. On one hand, there are teams with 2017-style integer overflows, and on the other there are teams with extensive properties and verified code. There's no one in between.
NARRATOR: The difference between the “Haves” and the “have nots” in this case is five years of progress.
NAT CHIN - Those mistakes were very easy to make because of the nature of Solidity - the nature of a language that doesn't really prevent you from making these mistakes.
NARRATOR: Now, this isn’t some academic statement or compliance-driven “best practice”; those living in the past are forgoing hard-won and often painful lessons.
Especially with smart contracts, where system-level invariants don't get you far - it's not just a matter of, say, fuzzing until nothing crashes and calling it good. You can't use "crashing" as a proxy for a "bad outcome" in a smart contract; it's much more about specifying what "good" looks like.
We’ve just heard from several software security engineers about what they see as trends, so it’s worth taking a moment to consider the high level message they’ve given us:
In the next two years, making mainstream software will become dramatically more complex. This isn't just about the high-assurance stuff; it's all software. We expect our code to do much more, and that functionality comes from modern techniques that let us pull in ever more external services and libraries.
We've passed the time when engineers could eyeball their code or run simple unit tests to see if everything works - and let me be clear: we have no skin in this game, because we don't sell testing gear. You need all of this testing regardless of whether you ever do an audit with us or anyone else.
DAN GUIDO - You don't need static testing or dynamic testing or fuzzers.
NARRATOR: DAN GUIDO puts it like this—
DAN GUIDO - You need to know what your code should do, and you need to build it so you can test for that. Applying the correct testing technique then becomes easy. I think we really need to define what kind of testing we are talking about -- this "cost of testing vs. not testing" framing will not land. People test their code today. They will insist they do. The difference is "test against what" and then "test how."
NARRATOR: We have one more idea that our engineers brought up when we asked them to look into the future: Compilers. We've already talked about writing code. We also need to talk about compiling it. And we can't say this too strongly: new ways of compiling software are the future.
PETER GOODMAN - Compilers are used every day by every company around the world for taking their source code and turning it into machine code, which runs on servers.
PETER GOODMAN - My name is PETER GOODMAN, and I'm a staff engineer at Trail of Bits.
NARRATOR: Peter is looking really closely at compilers, because the world of compilers is changing fast, too.
PETER GOODMAN - I do a lot of work on where compilers are today, and where compilers are going in the future, because I like to think we're on a 20-year cycle of compilers.
NARRATOR: His look at the last 20 or so years of compilers starts with GCC - the GNU Compiler Collection - which most agree is an excellent compiler suite, very popular in the Unix and Linux open-source worlds.
PETER GOODMAN - GCC was always thought of as this beast of a compiler in some ways - like, it's open source, everyone can edit it, everyone can look at it and stuff. But it always had this reputation of being challenging to modify…
NARRATOR: In the early 2000s, LLVM came along. Oh, one thing about these names - I said "GNU" before, which famously is a recursive acronym for "GNU's Not Unix." In much the same way, LLVM really just stands for LLVM now, because its early name, "Low Level Virtual Machine," confused everyone - it wasn't describing what most people think of today as a virtual machine. Don't get us started on names.
So LLVM, born at the University of Illinois, evolved to kind of fit in behind GCC. Eventually it was adopted by Apple, which now uses LLVM as the primary compiler for all its hardware and software.
This was partly because GCC's license required a company that extended GCC to contribute those extensions back to the open-source community. For organizations looking to build new competitive advantages, that was a non-starter. LLVM's licensing was far more permissive…
PETER GOODMAN - So somebody like Apple could start working on compiler backends that would target their custom hardware that they hadn't told the world about yet. And they would never need to reveal any of this stuff until, like, suddenly the new iPhone is released and it's like, Oh, there's this A4 chip … surprise!
NARRATOR: That mix of commercial and academic sensibilities led to LLVM's dominance.
PETER GOODMAN - It democratized compilers. Suddenly there was this compiler technology that academics could get behind, because it was created by academics and it was open source; industry could get behind it because they could develop it for their own niche architectures. You had this huge investment from academia and industry just pushing this technology further and further and further and further.
NARRATOR: Today's compiler technology operates at a level close to the machine itself, so it's very good at the optimizations machines care about. But Peter says that's no longer enough:
PETER GOODMAN - We need a representation that's closer to the source code and then maybe we need another representation that's somewhere in between the source code and somewhere in between this machine representation.
NARRATOR: He's talking about intermediate representations - ways of representing a program somewhere between the source code and the machine code.
Imagine a spectrum. At one end, the source representation is close to the language itself: you want to represent language constructs much the way the language does. At the other end, LLVM is very close to the machine, representing things in terms of instructions. Now introduce some "other dialect" - something that's perhaps close to a simplified C: it has some structure, but it doesn't have all the bells and whistles of the original language. That would be an intermediate representation. And Peter thinks that's where everything is headed.
PETER GOODMAN - I think the future is a technology called MLIR. It stands for Multi-Level Intermediate Representation, and it's hugely popular in the machine learning industry for optimizing all their workflows and computations. But I think it's the future general-purpose compiler technology or framework as well. It's got a lot of investment from academia and industry.
NARRATOR: Big programs have traditionally meant tons of analysis work, so the shift to MLIR means that analysis will become simpler. That will empower developers.
PETER GOODMAN - And is it the best thing there is? Is it the best sort of way of representing code? Maybe, maybe not. But what it has going for it is exactly what LLVM had going for it over the last 15 years, which is it's got a lot of momentum from academia and industry.
NARRATOR: The people who worked on this podcast are Emily Haavik, Chris Julin, DAN GUIDO, OPAL WRIGHT, NAT CHIN, Henrik Brodin, Fredrik Dahlgren, JOSSELIN FEIST, PETER GOODMAN, David Pokora, and hi, I’m Nick Selby, I’m the Director of the Software Assurance Practice here at Trail of Bits.
Chris Julin made our theme music.
Trail of Bits helps secure some of the world's most targeted organizations and devices. We combine high-end security research with a real-world attacker mentality to reduce risk and fortify code. We believe the most meaningful security gains hide at the intersection of human intellect and computational power. Learn more at trailofbits dot com or on twitter, AT trailofbits; DAN GUIDO’s Twitter account is AT dguido, and I’m AT fuzztech.