
It is possible to design a computer system that can’t be hacked.

October 5, 2016 • Howard Shrobe

Key Points

  • The director of cybersecurity at MIT's Computer Science and Artificial Intelligence Laboratory explains how we can work to strengthen our own cybersecurity.
  • We have the ability to re-engineer our current computer systems to be able to keep up with modern-day hackers.

Howard Shrobe, the director of cybersecurity at MIT’s Computer Science and Artificial Intelligence Laboratory, is speaking at Securing Our Future: Cambridge Cyber Summit.

Cyber attacks seem to occur nearly every day. Most people assume that this is an inherent consequence of our reliance on modern computer systems and networks.

The normal reasoning goes something like this: “Computer systems are big, complex systems of unprecedented scale. Vulnerabilities are an unavoidable consequence of this complexity, and we are stuck with them.”


In this view, the best we can do is manage the risk and try to limit the consequences. We erect perimeter protections such as firewalls, which provide vital but limited protection, and we patch and pray. Yet major breaches continue. In effect, we’re stuck in cyber hell, and we need to learn how to make the best of it.

I would like to offer an alternative, more optimistic view. First, it is important to take a more historical view of the problem. The way in which we architect computer systems has its roots in the 1970s, when the Unix operating system was developed, along with its system programming language, “C,” which is still used for most system programming. Computer hardware of that era was slow and memory was very expensive. Nothing mattered as much as squeezing out every ounce of performance possible. One consequence of this is that instead of enforcing critical properties at the most basic levels of the system (e.g., in the hardware or in the programming language), we left it to programmers to get everything right on every line of code. Furthermore, computer systems of that era weren’t ubiquitously connected to networks that offered access to attackers from around the world. So security wasn’t even a secondary concern: It wasn’t a concern at all.
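
To make that trade-off concrete, here is a minimal C sketch (my illustration, not from the original article) of the kind of mistake that neither the language nor the hardware of that era did anything to prevent; the buffer size, the string, and the use of strcpy are illustrative choices only.

```c
/* Illustration only: C performs no bounds checking, so a single
 * careless line can silently write past the end of a buffer. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char buffer[8];
    const char *input = "a string much longer than eight bytes";

    /* strcpy copies until it reaches the terminating NUL. If the source
     * is longer than the destination, it keeps writing past the end of
     * the buffer. The programmer must remember the safe form on every
     * such line, e.g. snprintf(buffer, sizeof buffer, "%s", input). */
    strcpy(buffer, input);   /* undefined behavior: a classic buffer overflow */

    printf("%s\n", buffer);
    return 0;
}
```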


In short, the architects of those systems made perfectly reasonable engineering trade-offs for their world, but our world is very different. Between then and now, Moore’s Law has allowed a steady improvement in computer performance. In keeping with Moore’s prediction that computing capabilities will increase in power exponentially, systems today are more powerful on all metrics by a factor of about 50,000. They are all networked, and they are entrusted with critical functions. Yet we still use architectures appropriate to an earlier era. To paraphrase Einstein, everything changed – except for our way of thinking about how to design computer systems.

The good news is that we can re-engineer these systems for today’s needs, removing entire classes of vulnerabilities at a time. What we need to think about are the architectural principles that would govern secure designs. There are a handful of these that cover most of the vulnerabilities that exist.


Memory-based errors, such as the infamous buffer overflow, have long accounted for a large share of all vulnerabilities. These errors can be eliminated completely in many programming languages. It’s also possible to enforce controls on the integrity and flow of information, and this is true even if programmers make coding mistakes.
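
As a minimal sketch (again my illustration, written in C), the hand-coded check below is essentially what memory-safe languages generate automatically on every array access, which is why this entire class of error disappears in those languages; the checked_get helper and the sample values are hypothetical, and information-flow controls are a separate mechanism not shown here.

```c
/* Sketch: a bounds-checked read written by hand in C. Memory-safe
 * languages insert an equivalent check automatically on every access,
 * so out-of-bounds reads and writes simply cannot happen. */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: returns data[index] only if the index is valid;
 * otherwise it stops the program instead of touching arbitrary memory. */
static int checked_get(const int *data, size_t length, size_t index) {
    if (index >= length) {
        fprintf(stderr, "index %zu out of bounds (length %zu)\n",
                index, length);
        exit(EXIT_FAILURE);
    }
    return data[index];
}

int main(void) {
    int values[4] = {10, 20, 30, 40};

    printf("%d\n", checked_get(values, 4, 2));  /* prints 30 */
    printf("%d\n", checked_get(values, 4, 9));  /* refuses and exits */
    return 0;
}
```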

In short, we have demonstrated that it is possible to design a modern computer system that attackers can’t break into and that can protect our information. The critical question today is not whether we can design safer computer systems, but how to create the right incentives for systems like these to become the new mainstream of computing.

We don’t need to live in cyber hell. But we do need to accelerate the transition to a new generation of computing systems that are inherently safe and resilient.

This post originally appeared on CNBC.
