Microarchitecture Security Roundtable Report

Conference room with a video wall for remote participants and a large table of researchers with laptops

The Spectre and Meltdown vulnerabilities, disclosed in early 2018, showed the world that microarchitecture security problems are real. Given the unique nature of microarchitecture security, the topics and research areas discussed covered many aspects of both discovery and mitigation.

To address this important topic, the Collaboratory hosted a roundtable event on Microarchitecture Security on April 27, 2018 at Red Hat’s Executive Briefing Center in Boston, MA. In attendance, both in person and virtually, were researchers and developers from Boston University, Red Hat, and the Graz University of Technology.

We had three main goals for this meeting:

  1. A prioritized list of topics and research areas involving microarchitecture security discovery and mitigation
  2. Determinations, on a per-project/topic basis, of potential integrations with Red Hat or other upstream projects, so that the technology has a long-term upstream home
  3. A list of assigned action items and next steps for the upstreaming process

We met these goals during the course of the event as you’ll learn in this post.

Kickoff / introductions

Jon Masters kicked off the event by emphasizing the impact that the Meltdown and Spectre vulnerabilities had on computing in general, and the opportunity the Collaboratory has to engage in research on the topic so that the technology industry is better prepared for the next vulnerability.

Next, we did introductions around the room.

Affinity mapping microarchitecture security topics

Roundtable participants gather around sticky notes with topic ideas, voting on and discussing them

This event followed a design thinking protocol. We created an affinity map of topics, ideas, and questions around finding the next side-channel attack before it happens. Participants were invited to share their ideas, and every idea was posted up. Next, the group sorted all of the ideas into categories, and each participant was given a fixed number of votes to distribute among the ideas they considered most important.

For each of the resulting topic categories, the group discussed which existing and/or potential new projects best related to the topic. Below are discussion summaries per topic.

Photo of yellow post-it notes on a glass wall, organized into categories.

Topic: Side channels / covert channels

For this topic there isn’t any concrete upstream project; the topic is more research-oriented. We do want some outcomes on the research side, however, so here are two student project ideas around mitigation at the software level:

Project 1: Hackathon for masters-level students

(Skillsets: Undergrad/Masters students with a background in computer architecture and operating systems. The mentor would be someone who could provide a one-hour overview of the space. The EbbRT folks or Rich West with Quest could be good people.)

What are the simplest possible things you could do to mitigate a cache-inspection type attack (speculative execution attack)? Exploring this would give us a baseline for the simplest possible things we could do at the software level to mitigate these types of attacks. It could involve 7-8 students prototyping to see what the easiest and quickest solutions are. Maybe one of these would turn out to be competitive with some of the hardware-level mitigation. Non-determinism is a theme — how do you make things more deterministic?

Project 2: Summer internship

(Skillset: Likely requires Ph.D.-level and compiler background.)

Can you annotate within an OS kernel (e.g., Quest) which pieces of data are more or less sensitive to speculative execution? How do you allocate whatever cost you pay as a defender to make sure you protect the things that matter most? This could involve using heuristics based on the compiler we use. Is there something in the optimizing workflow that could be useful for automatically inferring the sensitivity of certain types of data vs. other types?

There are industry researchers who have done a lot of work in this area; for example, any time a secret is loaded, the OS knows it and can treat the data accordingly. It’s a matter of tracing data that gets copied in from userspace to see what happens to it as it gets passed around in the kernel.

One of the problems with speculation is that you have to look at all the non-normal code paths — where might a piece of data be loaded, even if the program flow doesn’t tell you when to load it. If you can annotate a piece of data “I don’t care if this data is public,” you treat it one way, if not you treat it another way. You would like to be able to trace what executed speculatively.

Other points:

  • This might be relevant to Uli’s thought on unikernels? We should be able to create a stripped-down build of Linux that is basically a unikernel, to use for security purposes.
  • Where would the privacy level of data be annotated? SELinux tagging? In the Linux kernel, only certain functions may access data tagged “__user”, i.e., data living in userspace. You might be able to do the same thing with kernel data: “this data is never allowed to influence control flow at all.”

Topic: Hypervisors, supervisors, their role in detecting exploits

There were two project ideas for this topic area:

Project 1: Detecting exploits

Side-channel attacks should leave some kind of footprint, maybe in performance, maybe somewhere else. It could be interesting to run existing workloads to see if there is a way to develop a tool that checks whether an exploit is installed and running.

If an exploit is running in a VM, we can shut down the VM or move it to a separate host so it is air-gapped. You could dynamically keep migrating “possibly infected” VMs so that they never have time to really execute the exploit on the hardware.

What research is there on detecting these things? People have had the idea of using performance counters for detection: cache flushes or cache evictions are interesting signals, although not all chips have counters for them. Branch-predictor stats or branch miss rates are another good example.

Project 2: Isolate differences between non-malicious and malicious code

To detect something malicious, you first run known non-malicious workloads and collect stats, so that you can then isolate the differences exhibited by malicious code. We had a very rudimentary implementation of this in OpenShift, for example.

Roundtable participants meeting in a group discussion.


Topic: Hardware platforms and formal methods

There were four project ideas for this topic area:

Project 1: Test bed lab

This would involve creating an environment with a population of hardware from different vendors, where the vendors have an interest in researchers having early access to the hardware. UNH, MGHPCC, and Red Hat are all possible participants. Sample applications could be available there: Oracle, RHV, Hyper-V.

Project 2: Build research hardware

We could take existing concepts and build real hardware that can actually be used for micro-architecture security research. Some RISC-V hardware has cache coherency; we could build actual hardware to study this.

Project 3: “Principled Architecture” for Field Programmable Gate Arrays (FPGAs)

This involves having a design philosophy, going in, of being aware of security vulnerabilities; this is a similar idea to project #2 above, but would involve close partnership with an industry hardware vendor or vendors.

Project 4: Formal methods to describe and verify the use of data in a kernel

It is at least possible to declare whether a piece of data is important or whether it could be leaked safely. This project could also involve tracking the provenance of data. Harvard Dataverse is potentially interesting here: data provenance in the hardware, and data provenance around datasets.

Field Programmable Gate Arrays (FPGAs)

There was one project idea for this topic:

Project 1: Build a shell around FPGAs that can execute OS and application code

How do we build a shell around FPGAs such that OS code and application code can execute on them?

Someone with real kernel knowledge from Red Hat could supervise. The direction the hardware is heading is to allow a number of virtual functions to be defined so that one FPGA looks like several FPGAs.

Having a “virtual wire” through the FPGA is very interesting but also has its challenges.

Summary & Further Resources

Participants from the event are currently investigating the feasibility of the project ideas above, and the Collaboratory hopes to provide follow-ups on the progress of that work, including which projects came to fruition, in future postings here.

The following two resources served as background reading for this roundtable; if you are interested in further investigation of this topic you may find them useful:

Special thanks to Collaboratory member David Cantrell for facilitating this roundtable session and providing the photos for this post.