Princeton University

School of Engineering & Applied Science

Specifying, Verifying, and Translating Between Memory Consistency Models

Speaker: Daniel Lustig
Location: Engineering Quadrangle J401
Date/Time: Thursday, November 5, 2015 - 3:00pm to 4:30pm

Abstract
Modern computer processors are actually collections of multiple independent processing units called "cores". These cores generally operate independently of one another, but when they do need to cooperate, they do so by sending messages through a block of memory that is accessible to all of the cores. For example, in order for one core to send a message to another, the sender will first write data into memory, and the receiver will then read that data back out of memory.
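As a minimal illustration of this handshake (a sketch, not code from the talk; the names data and ready and the use of C++11 atomics are choices made here purely for exposition):

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

int data = 0;                    // the message payload
std::atomic<bool> ready{false};  // flag: "the payload has been written"

// Runs on one core: write the message, then raise the flag.
void sender() {
    data = 42;
    ready.store(true, std::memory_order_release);
}

// Runs on another core: wait for the flag, then read the message.
void receiver() {
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    std::printf("received %d\n", data);  // prints 42
}

int main() {
    std::thread t(sender);
    receiver();
    t.join();
}
```

The release/acquire annotations ask the memory system to keep the two steps in order; the next paragraph explains why that matters.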
In nearly all modern processors, accesses to memory may be delayed or rearranged into a more efficient order that allows the processor to run faster. Although this rearranging improves performance dramatically, it has the unfortunate side effect that it can interfere with the carefully orchestrated patterns that cores use to communicate with each other. Most processors therefore define what is known as a memory consistency model: a set of rules specifying which kinds of reordering and buffering are and are not allowed. Memory consistency models have proven notoriously difficult even to write down, let alone to fully understand, and this has led to much confusion and numerous hardware and software bugs over the decades.
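To make the hazard concrete, here is the same illustrative sketch rewritten with C++'s weakest ("relaxed") atomic ordering, standing in for a hardware memory model that permits the reordering described above:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void sender() {
    // Relaxed ordering: nothing forces these two stores to become
    // visible to other cores in the order the program issued them.
    data.store(42, std::memory_order_relaxed);
    ready.store(true, std::memory_order_relaxed);
}

void receiver() {
    while (!ready.load(std::memory_order_relaxed)) { /* spin */ }
    // "received 0" is a legal outcome here: the consistency model
    // allows the receiver to observe the flag before the payload.
    std::printf("received %d\n",
                data.load(std::memory_order_relaxed));
}

int main() {
    std::thread t(sender);
    receiver();
    t.join();
}
```

A memory consistency model is exactly the rulebook that says whether the surprising "received 0" outcome is allowed; under the release/acquire version above, it is forbidden.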
My thesis proposes new techniques for specifying memory consistency models and for verifying that hardware actually follows its own memory consistency model rules. MOSTs and ArMOR provide a framework for more precisely describing how the individual pieces that form a memory consistency model behave. PipeCheck provides a novel theoretical framework and freely available software for modeling how the inner structure of a processor gives rise to the memory consistency model in the first place. These techniques simplify the process of working with memory consistency models, and they allow computer architects to more carefully analyze the increasingly diverse types of hardware that we see today and will continue to see in the future.