The Multicore Algorithmics group headed by Prof. Nir Shavit develops techniques for designing, implementing, and reasoning about multiprocessor algorithms, in particular concurrent data structures for multicore machines and the mathematical foundations of the computation models that govern their behavior.

Faculty


Prof. Nir Shavit
My main interests are techniques for designing, implementing, and reasoning about multiprocessor algorithms, in particular concurrent data structures for multicore machines and the mathematical foundations of the computation models that govern their behavior. My research these days is directed at the use of randomness and combinatorial techniques in concurrent algorithm and data-structure design. I am also interested in parallelism in the brain and what we can learn from it towards the design of futuristic computing machines.

Postdocs


Igor Zablotchi
My research focuses on the theory and practice of concurrent algorithms. I am most interested in the impact of emerging technologies, such as RDMA and persistent memory, on the way that we build distributed systems. I am also interested in algorithms for scalable machine learning on multicore machines.

Graduate Students

Past Members

Alexander Matveev
Quote: “People who are really serious about software should make their own hardware.”
I am working on practical and efficient synchronization techniques for multi-core systems. In particular, my focus is on hardware and software transactional memory designs and implementations.
Rati Gelashvili
Quote: “inveniam viam aut faciam”
I am working on shared memory algorithms and data-structures (full of exciting mathematical challenges). I am also very interested in extending the topological framework for distributed computability in various directions.
Justin Kopinsky
Quote: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”
I am primarily interested in concurrent data structures with a focus on using non-blocking algorithms and other methods to achieve scalability. I’m also currently working on producing a model for Hybrid Transactional Memory which is motivated by real architectures but is rigorous enough to allow us to prove interesting lower bounds.
William Leiserson
Quote: “Any technology that is distinguishable from magic is insufficiently advanced.”
I work on concurrent data structures, structures that scale efficiently with the number of cores operating on them. I’m especially interested in the application of hardware transactional memory (HTM) to problems that use locks to guarantee mutual exclusion. HTM has the potential to be lighter-weight than a lock, especially if there is low contention, and lock-freedom has desirable progress guarantees.
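As an illustration of the lock-elision idea above, here is a minimal sketch in C using Intel's RTM intrinsics (_xbegin, _xend, _xtest, _xabort). It is a generic, assumed pattern, not code from the group; the names elided_lock, elided_unlock, and lock_held are hypothetical. The critical section first runs as a hardware transaction, and only aborted or contended executions fall back to a real mutex.

/* Hypothetical lock-elision sketch: try the critical section as a hardware
   transaction; fall back to a conventional mutex after repeated aborts.
   Compile with -mrtm on an RTM-capable x86 processor. */
#include <immintrin.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile bool lock_held = false;  /* read inside every transaction */

static void elided_lock(void)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            if (!lock_held)
                return;        /* lock is free: run transactionally, no lock write */
            _xabort(0xff);     /* someone holds the real lock: abort and retry */
        }
        /* transaction aborted: loop and retry, then fall back */
    }
    pthread_mutex_lock(&fallback_lock);  /* transactional fast path failed */
    lock_held = true;
}

static void elided_unlock(void)
{
    if (_xtest()) {            /* still inside a transaction: commit it */
        _xend();
    } else {                   /* we actually took the fallback mutex */
        lock_held = false;
        pthread_mutex_unlock(&fallback_lock);
    }
}

Because a transactional execution never writes the lock word, independent critical sections do not conflict on it, which is where the lighter-than-a-lock benefit under low contention comes from; a production version would use C11 atomics for lock_held and an adaptive retry policy.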
Michael Coulombe
Quote: “For the mind does not require filling like a bottle, but rather, like wood, it only requires kindling to create in it an impulse to think independently and an ardent desire for the truth.”
My research interests include provable algorithms and data structures, with applications ranging from concurrency to computational biology, and the methods of expressing and proving them. I am currently working on concurrent, shared memory data structures.

Dan Alistarh

Quote: “An algorithm must be seen to be believed.”
I’m mainly interested in distributed algorithms. My main research topics are the complexity of concurrent data structures and communication in distributed systems.

Jerry Li
Quote: “wow such math much theory”
I am broadly interested in understanding the core of what is computable, in both the sequential and distributed settings. I am interested in the topological underpinnings of shared memory computation, as well as in understanding the limits of what is learnable through query access. I particularly enjoy applications of analytic and topological tools to problems in theoretical computer science.
