Compiler Reading Group


Recent and Upcoming Conferences:
[ ASPLOS'04 | CASES'05 | CGO'06 | LCTES'05 | MICRO'05 | PACT'05 | PLDI'06 | PPoPP'06 ]
Also of interest: ISCA, HPCA, POPL, OOPSLA, ICS, CC, Usenix Security, IEEE TOSP, LCPC.

Next Scheduled Meeting(s):

To be announced.

About the Group

The Compiler Reading Group meets weekly to discuss a recent paper in the field of programming languages, compilers, or computer architecture. The discussion is informal and students from all backgrounds are welcome to attend. Each meeting has a designated "presenter", who opens the discussion with a brief (5-10 minute) overview of the paper. While everyone should bring questions and opinions about the paper, the presenter should read the paper a bit more carefully, possibly look at some related work, and facilitate the discussion.

To receive announcements about the group, please join the CRG Mailing List.

Previous Meetings:

Programming Languages for Games: Wed, Apr 19th, 4pm, 32-G825

The Next Mainstream Programming Languages: A Game Developer's Perspective (POPL 2006)
Tim Sweeney
(Suggested by All)

In this keynote talk for POPL 2006, Tim Sweeney (of Epic Games) presents a vision for programming language support for game development.

We will discuss (not present) the slides in the reading group, so please bring questions, opinions, discussion items, etc.


ISA for Transactions: Mon, April 10th, 11am, 32-G825

Architectural Semantics for Practical Transactional Memory (ISCA'06)
Austen McDonald, JaeWoong Chung, Brian D. Carlstrom, Chi Cao Minh, Hassan Chafi, Christos Kozyrakis and Kunle Olukotun
(Suggested by Michael Gordon)

Transactional Memory (TM) simplifies parallel programming by allowing for parallel execution of atomic tasks. Thus far, TM systems have focused on implementing transactional state buffering and conflict resolution. Missing is a robust hardware/software interface, not limited to simplistic instructions defining transaction boundaries. Without rich semantics, current TM systems cannot support basic features of modern programming languages and operating systems such as transparent library calls, conditional synchronization, system calls, I/O, and runtime exceptions.

This paper presents a comprehensive instruction set architecture (ISA) for TM systems. Our proposal introduces three key mechanisms: two-phase commit; support for software handlers on commit, violation, and abort; and full support for open- and closed-nested transactions with independent rollback. These mechanisms provide a flexible interface to implement programming language and operating system functionality. We also show that these mechanisms are practical to implement at the ISA and microarchitecture level for various TM systems. Using an execution-driven simulation, we demonstrate both the functionality (e.g., I/O and conditional scheduling within transactions) and performance potential (2.2× improvement for SPECjbb2000) of the proposed mechanisms. Overall, this paper establishes a rich and efficient interface to foster both hardware and software research on transactional memory.

Futures for Java: Wed, Mar 22nd, 4pm, 32-G825

Safe Futures for Java (OOPSLA 2005)
Adam Welc, Suresh Jagannathan, and Antony Hosking
(Suggested by Allyn Dimock)

A future is a simple and elegant abstraction that allows concurrency to be expressed, often through a relatively small rewrite of a sequential program. In the absence of side-effects, futures serve as benign annotations that mark potentially concurrent regions of code. Unfortunately, when computation relies heavily on mutation, as is the case in Java, its meaning is less clear, and much of its intended simplicity is lost.

This paper explores the definition and implementation of safe futures for Java. One can think of safe futures as truly transparent annotations on method calls, which designate opportunities for concurrency. Serial programs can be made concurrent simply by replacing standard method calls with future invocations. Most significantly, even though some parts of the program are executed concurrently and may indeed operate on shared data, the semblance of serial execution is nonetheless preserved. Thus, program reasoning is simplified since data dependencies present in a sequential program are not violated in a version augmented with safe futures.

Besides presenting a programming model and API for safe futures, we formalize the safety conditions that must be satisfied to ensure equivalence between a sequential Java program and its future-annotated counterpart. A detailed implementation study is also provided. Our implementation exploits techniques such as object versioning and task revocation to guarantee necessary safety conditions. We also present an extensive experimental evaluation of our implementation to quantify overheads and limitations. Our experiments indicate that for programs with modest mutation rates on shared data, applications can use futures to profitably exploit parallelism, without sacrificing safety.
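The rewrite the abstract describes — replacing a method call with a future invocation whose result is claimed at first use — can be sketched with the standard java.util.concurrent API. (This is not the paper's safe-futures runtime, which additionally preserves serial semantics under shared mutation; it only illustrates the call-site transformation.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureSketch {
    // Stands in for a costly computation in an otherwise serial program.
    static int expensive(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Serial version:  int r = expensive(1000);
        // Future version: only the call site changes; the consumer is unchanged.
        Future<Integer> r = pool.submit(() -> expensive(1000));

        // get() blocks until the result is ready, preserving the serial data
        // dependence at the point of first use -- the property the paper's
        // safe futures enforce automatically even with shared mutable state.
        System.out.println(r.get());  // 500500
        pool.shutdown();
    }
}
```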

Language Support for Lightweight Transactions: Wed, March 15th, 4pm, 32-G825

Language Support for Lightweight Transactions (OOPSLA 2003)  [ PDF | PS ]
Tim Harris and Keir Fraser
(Suggested by Rodric Rabbah)

Concurrent programming is notoriously difficult. Current abstractions are intricate and make it hard to design computer systems that are reliable and scalable. We argue that these problems can be addressed by moving to a declarative style of concurrency control in which programmers directly indicate the safety properties that they require.

In our scheme the programmer demarks sections of code which execute within lightweight software-based transactions that commit atomically and exactly once. These transactions can update shared data, instantiate objects, invoke library features and so on. They can also block, waiting for arbitrary boolean conditions to become true. Transactions which do not access the same shared memory locations can commit concurrently. Furthermore, in general, no performance penalty is incurred for memory accesses outside transactions.

We present a detailed design of this proposal along with an implementation and evaluation. We argue that the resulting system (i) is easier for mainstream programmers to use, (ii) prevents lock-based priority-inversion and deadlock problems and (iii) can offer performance advantages.
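The paper's STM implements commit-atomically-and-exactly-once behavior for whole code sections; as a rough analogue, the optimistic read-compute-commit-retry cycle at its heart can be illustrated on a single word with plain java.util.concurrent.atomic primitives (a sketch, not the authors' mechanism):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger balance = new AtomicInteger(0);

    // Roughly what a runtime does for a one-word transaction: read, compute
    // tentatively, then commit atomically; on conflict, retry from the start.
    int deposit(int amount) {
        while (true) {
            int seen = balance.get();                 // transactional read
            int updated = seen + amount;              // tentative computation
            if (balance.compareAndSet(seen, updated)) // commit exactly once
                return updated;
            // CAS failed: a conflicting commit raced in -- retry.
        }
    }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        c.deposit(40);
        System.out.println(c.deposit(2));  // 42
    }
}
```

Note the key property carried over from the abstract: two deposits touching the same location serialize at commit time, while transactions on disjoint locations would proceed concurrently.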

Common-Case Transactions: Wed, Mar 1st, 4:00pm, 32-G825

The Common Case Transactional Behavior of Multithreaded Programs (HPCA 2006)
JaeWoong Chung, Hassan Chafi, Chi Cao Minh, Austen McDonald, Brian D. Carlstrom, Christos Kozyrakis, Kunle Olukotun
(Suggested by Bill Thies)

Transactional memory (TM) provides an easy-to-use and high-performance parallel programming model for the upcoming chip-multiprocessor systems. Several researchers have proposed alternative hardware and software TM implementations. However, the lack of transaction-based programs makes it difficult to understand the merits of each proposal and to tune future TM implementations to the common case behavior of real applications.

This work addresses this problem by analyzing the common case transactional behavior for 35 multithreaded programs from a wide range of application domains. We identify transactions within the source code by mapping existing primitives for parallelism and synchronization management to transaction boundaries. The analysis covers basic characteristics such as transaction length, the distribution of read-set and write-set sizes, and the frequency of nesting and I/O operations. The measured characteristics provide key insights into the design of efficient TM systems for both nonblocking synchronization and speculative parallelization.

Aggressive Inlining: Wed, Feb 22nd, 4:00pm, 32-G825

Aggressive Inlining (PLDI 1997)
Andrew Ayers, Robert Gottlieb, Richard Schooler
(Suggested by Janis Sermulins)

Existing research understates the benefits that can be obtained from inlining and cloning, especially when guided by profile information. Our implementation of inlining and cloning yields excellent results on average and very rarely lowers performance. We believe our good results can be explained by a number of factors: inlining at the intermediate-code level removes most technical restrictions on what can be inlined; the ability to inline across files and incorporate profile information enables us to choose better inline candidates; a high-quality back end can exploit the scheduling and register allocation opportunities presented by larger subroutines; an aggressive processor architecture benefits from more predictable branch behavior; and a large instruction cache mitigates the impact of code expansion. We describe the often dramatic impact of our inlining and cloning on performance: for example, the implementations of our inlining and cloning algorithms in the HP-UX 10.20 compilers boost SPECint95 performance on a PA8000-based workstation by a factor of 1.32.

Soft Error Detection: Wed, Feb 8th, 4pm, 32-G825

Reducing Resource Redundancy for Concurrent Error Detection Techniques in High Performance Microprocessors (HPCA 2006)
Sumeet Kumar and Aneesh Aggarwal
(Suggested by Charles O'Donnell)

With reducing feature size, increasing chip capacity, and increasing clock speed, microprocessors are becoming increasingly susceptible to transient (soft) errors. Redundant multi-threading (RMT) is an attractive approach for concurrent error detection and recovery. However, redundant threads significantly increase the pressure on the processor resources, resulting in dramatic performance impact.

In this paper, we propose reducing resource redundancy as a means to mitigate the performance impact of redundancy. In this approach, all the instructions are redundantly executed, however, the redundant instructions do not use many of the resources used by an instruction. The approach taken to reduce resource redundancy is to exploit the runtime profile of the leading thread to optimally allocate resources to the trailing thread in a staggered RMT architecture. The key observation used in this approach is that, even with a small slack between the two threads, many instructions in the leading thread have already produced their results before their trailing counterparts are renamed. We investigate two techniques in this approach (i) register bits reuse technique that attempts to use the same register (but different bits) for both the copies of the same instruction, if the result produced by the instruction is of small size, and (ii) register value reuse technique that attempts to use the same register for a main instruction and a distinct redundant instruction, if both the instructions produce the same result. These techniques, along with some others, are used to reduce redundancy in register file, reorder buffer, and load/store buffer. The techniques are evaluated in terms of their performance, power, and vulnerability impact on an RMT processor. Our experiments show that the techniques achieve about 95% performance improvement and about 17% energy reduction. The vulnerability of the RMT remains the same with the techniques.

Compiling for EDGE Architectures: Wed, Feb 1st, 4pm, 32-G725

Compiling for EDGE Architectures (CGO 2006)
Aaron Smith, Jim Burrill, Jon Gibson, Bertrand Maher, Nick Nethercote, Bill Yoder, Doug Burger, Kathryn S. McKinley
(Suggested by Michael Gordon)

Explicit Data Graph Execution (EDGE) architectures offer the possibility of high instruction-level parallelism with energy efficiency. In EDGE architectures, the compiler breaks a program into a sequence of structured blocks that the hardware executes atomically. The instructions within each block communicate directly, instead of communicating through shared registers. The TRIPS EDGE architecture imposes restrictions on its blocks to simplify the microarchitecture: each TRIPS block has at most 128 instructions, issues at most 32 loads and/or stores, and executes at most 32 register bank reads and 32 writes. To detect block completion, each TRIPS block must produce a constant number of outputs (stores and register writes) and a branch decision.

The goal of the TRIPS compiler is to produce TRIPS blocks full of useful instructions while enforcing these constraints. This paper describes a set of compiler algorithms that meet these sometimes conflicting goals, including an algorithm that assigns load and store identifiers to maximize the number of loads and stores within a block. We demonstrate the correctness of these algorithms in simulation on SPEC2000, EEMBC, and microbenchmarks extracted from SPEC2000 and others. We measure speedup in cycles over an Alpha 21264 on microbenchmarks.


Review of Dataflow Languages: Wed, Jan 25th, 4pm, 32-G825

Advances in Dataflow Programming Languages (ACM Comp. Surveys 2004) - [MIT-only access]
Wesley M. Johnston, J. R. Paul Hanna, Richard J. Millar
(Suggested by Mark Stephenson)

Many developments have taken place within dataflow programming languages in the past decade. In particular, there has been a great deal of activity and advancement in the field of dataflow visual programming languages. The motivation for this article is to review the content of these recent developments and how they came about. It is supported by an initial review of dataflow programming in the 1970s and 1980s that led to current topics of research. It then discusses how dataflow programming evolved toward a hybrid von Neumann dataflow formulation, and adopted a more coarse-grained approach. Recent trends toward dataflow visual programming languages are then discussed with reference to key graphical dataflow languages and their development environments. Finally, the article details four key open topics in dataflow programming languages.

Dataflow vs. Superscalar: Wed, Jan 17th, 4:00pm, 32-G825

Dataflow: A Complement to Superscalar (ISPASS 2005)
Mihai Budiu, Pedro V. Artigas, Seth Copen Goldstein
(Suggested by Michal Karczmarek)

There has been a resurgence of interest in dataflow architectures, because of their potential for exploiting parallelism with low overhead. In this paper we analyze the performance of a class of static dataflow machines on integer, media, and control-intensive programs and we explain why a dataflow machine, even with unlimited resources, does not always outperform a superscalar processor on general-purpose codes, under the assumption that both machines take the same time to execute basic operations. We compare a program-specific dataflow machine with unlimited parallelism to a superscalar processor running the same program. While the dataflow machines provide very good performance on most data-parallel programs, we show that the dataflow machine cannot always take advantage of the available parallelism. Using the dynamic critical path we investigate the mechanisms used by superscalar processors to provide a performance advantage and their impact on a dataflow model.


Shape Analysis: Wed, Jan 11th, 4:00pm, 32-G825

Static Program Analysis via 3-Valued Logic (CAV'04)  [ PDF | PS ]
Thomas Reps, Mooly Sagiv, Reinhard Wilhelm
(Suggested by Allyn Dimock)

This paper reviews the principles behind the paradigm of "abstract interpretation via 3-valued logic," discusses recent work to extend the approach, and summarizes ongoing research aimed at overcoming remaining limitations on the ability to create program analysis algorithms fully automatically.

Supplementary material (optional):

More background is available in a previous paper by the authors (CC'00): PDF or PS.

Lowering the Barriers to Programming: Wed, Dec 21st, 4:00pm, 32-G825

Lowering the Barriers to Programming: A Taxonomy of Programming Environments and Languages for Novice Programmers (ACM Computing Surveys 2005)
Part 1/2: Pages 83-108 only!
Caitlin Kelleher and Randy Pausch
(Suggested by Bill Thies)

Since the early 1960s, researchers have built a number of programming languages and environments with the intention of making programming accessible to a larger number of people. This article presents a taxonomy of languages and environments designed to make programming more accessible to novice programmers of all ages. The systems are organized by their primary goal, either to teach programming or to use programming to empower their users, and then by each system's authors' approach to making learning to program easier for novice programmers. The article explains all categories in the taxonomy, provides a brief description of the systems in each category, and suggests some avenues for future work in novice programming environments and languages.

Market-Based Load Balancing: Wed, Dec 14th, 4:00pm, 32-G825

Contract-Based Load Management in Federated Distributed Systems (NSDI 2004)  [ PDF | PS ]
Magdalena Balazinska, Hari Balakrishnan, Mike Stonebraker
(Suggested by Rodric Rabbah)

This paper focuses on load management in loosely-coupled federated distributed systems. We present a distributed mechanism for moving load between autonomous participants using bilateral contracts that are negotiated offline and that set bounded prices for moving load. We show that our mechanism has good incentive properties, efficiently redistributes excess load, and has a low overhead in practice.

Our load management mechanism is especially well-suited for distributed stream-processing applications, an emerging class of data-intensive applications that employ a "continuous query processing" model. In this model, streams of data are processed and composed continuously as they arrive rather than after they are indexed and stored. We have implemented the mechanism in the Medusa distributed stream processing system, and we demonstrate its properties using simulations and experiments.

Converting Atomicity to Locks: Wed, Dec 7th, 4:00pm, 32-G825

Autolocker: Synchronization Inference for Atomic Sections (POPL 2006)
Bill McCloskey, Feng Zhou, David Gay, Eric Brewer
(Suggested by Ben Wagner)

The movement to multi-core processors increases the need for simpler, more robust parallel programming models. Atomic sections have been widely recognized for their ease of use. They are simpler and safer to use than manual locking and they increase modularity. But existing proposals have several practical problems, including high overhead and poor interaction with I/O. We present pessimistic atomic sections, a fresh approach that retains many of the advantages of optimistic atomic sections as seen in "transactional memory" without sacrificing performance or compatibility. Pessimistic atomic sections employ the locking mechanisms familiar to programmers while relieving them of most burdens of lock-based programming, including deadlocks. Significantly, pessimistic atomic sections separate correctness from performance: they allow programmers to extract more parallelism via finer-grained locking without fear of introducing bugs. We believe this property is crucial for exploiting multi-core processor designs.

We describe a tool, Autolocker, that automatically converts pessimistic atomic sections into standard lock-based code. Autolocker relies extensively on program analysis to determine a correct locking policy free of deadlocks and race conditions. We evaluate the expressiveness of Autolocker by modifying a 50,000 line high-performance web server to use atomic sections while retaining the original locking policy. We analyze Autolocker's performance using microbenchmarks, where Autolocker outperforms software transactional memory by more than a factor of 3.
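A hypothetical sketch of the kind of code Autolocker emits: the programmer thinks in terms of an atomic section, and the tool generates lock-based code guaranteed free of deadlocks. Here the generated policy is imitated by hand with a global lock order (ordering by a made-up account id; the paper's program analysis computes its own policy, not necessarily this one):

```java
public class TransferSketch {
    static class Account {
        final int id;
        int balance;
        Account(int id, int balance) { this.id = id; this.balance = balance; }
    }

    // What the programmer conceptually writes:
    //     atomic { from.balance -= amt; to.balance += amt; }
    // Acquiring the two locks in a fixed global order (lowest id first) is one
    // standard way to rule out deadlock when two transfers run in opposite
    // directions concurrently.
    static void transfer(Account from, Account to, int amt) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amt;
                to.balance += amt;
            }
        }
    }

    public static void main(String[] args) {
        Account a = new Account(1, 100), b = new Account(2, 0);
        transfer(a, b, 30);
        System.out.println(a.balance + " " + b.balance);  // 70 30
    }
}
```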

Dynamic Energy Optimization: Wed, Nov 30th, 4pm, 32-G825

A Dynamic Compilation Framework for Controlling Microprocessor Energy and Performance (MICRO 2005, Best Paper)
Qiang Wu, V.J. Reddi, Youfeng Wu, Jin Lee, Dan Connors, David Brooks, Margaret Martonosi, Douglas W. Clark
(Suggested by Matthew Drake)

Dynamic voltage and frequency scaling (DVFS) is an effective technique for controlling microprocessor energy and performance. Existing DVFS techniques are primarily based on hardware, OS time interrupts, or static-compiler techniques. However, substantially greater gains can be realized when control opportunities are also explored in a dynamic compilation environment. There are several advantages to deploying DVFS and managing energy/performance tradeoffs through the use of a dynamic compiler. Most importantly, dynamic compiler driven DVFS is fine-grained, code-aware, and adaptive to the current microarchitecture environment.

This paper presents a design framework of the run-time DVFS optimizer in a general dynamic compilation system. A prototype of the DVFS optimizer is implemented and integrated into an industrial strength dynamic compilation system. The obtained optimization system is deployed in a real hardware platform that directly measures CPU voltage and current for accurate power and energy readings. Experimental results, based on physical measurements for over 40 SPEC or Olden benchmarks, show that significant energy savings are achieved with little performance degradation. SPEC2K FP benchmarks benefit with energy savings of up to 70% (with 0.5% performance loss). In addition, SPEC2K INT show up to 44% energy savings (with 5% performance loss), SPEC95 FP save up to 64% (with 4.9% performance loss), and Olden save up to 61% (with 4.5% performance loss). On average, the technique leads to an energy delay product (EDP) improvement that is 3X-5X better than static voltage scaling, and is more than 2X (22% vs. 9%) better than the reported DVFS results of prior static compiler work. While the proposed technique is an effective method for microprocessor voltage and frequency control, the design framework and methodology described in this paper have broader potential to address other energy and power issues such as di/dt and thermal control.
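The energy-delay product comparison above follows from simple arithmetic: if DVFS saves a fraction of the energy at the cost of a fractional slowdown, EDP scales by the product of the two factors. A small sketch, plugging in the SPEC2K FP figures quoted in the abstract:

```java
public class EdpSketch {
    // EDP = energy x delay. If DVFS saves fraction 'energySaved' of energy at
    // the cost of fraction 'perfLoss' extra run time, the new EDP relative to
    // the old is (1 - energySaved) * (1 + perfLoss).
    static double edpRatio(double energySaved, double perfLoss) {
        return (1 - energySaved) * (1 + perfLoss);
    }

    public static void main(String[] args) {
        // SPEC2K FP best case from the abstract: 70% energy saved, 0.5% slowdown.
        double r = edpRatio(0.70, 0.005);  // 0.30 * 1.005 = 0.3015
        System.out.printf("EDP improvement: %.1f%%%n", (1 - r) * 100);  // roughly 70%
    }
}
```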

Cross-Run VM Profiling: Wed, Nov 16th, 4:00pm, 32-G825

Improving Virtual Machine Performance Using a Cross-Run Profile Repository (OOPSLA 2005)  [ PDF | PS ]
Matthew Arnold, Adam Welc, V.T. Rajan
(Suggested by Mark Stephenson)

Virtual machines for languages such as the Java programming language make extensive use of online profiling and dynamic optimization to improve program performance. But despite the important role that profiling plays in achieving high performance, current virtual machines discard a program's profile data at the end of execution, wasting the opportunity to use past knowledge to improve future performance.

In this paper, we present a fully automated architecture for exploiting cross-run profile data in virtual machines. Our work addresses a number of challenges that previously limited the practicality of such an approach.

We apply this architecture to address the problem of selective optimization, and describe our implementation in IBM's J9 Java virtual machine. Our results demonstrate substantial performance improvements on a broad suite of Java programs, with average performance improvements ranging from 8.8% to 16.6% depending on the execution scenario.

Runtime specialization: Wed, Oct 26th, 4:00pm, 32-G825

Runtime Specialization With Optimistic Heap Analysis (OOPSLA 2005)  [ PDF | PS ]
Ajeet Shankar, S. Subramanya Shastry, Rastislav Bodik, James E. Smith
(Suggested by Allyn Dimock)

We describe a highly practical program specializer for Java programs. The specializer is powerful, because it specializes optimistically, using (potentially transient) constants in the heap; it is precise, because it specializes using data structures that are only partially invariant; it is deployable, because it is hidden in a JIT compiler and does not require any user annotations or offline preprocessing; it is simple, because it uses existing JIT compiler ingredients; and it is fast, because it specializes programs in under 1s. These properties are the result of (1) a new algorithm for selecting specializable code fragments, based on a notion of influence; (2) a precise store profile for identifying constant heap locations; and (3) an efficient invalidation mechanism for monitoring optimistic assumptions about heap constants. Our implementation of the specializer in the Jikes RVM has low overhead, selects specialization points that would be chosen manually, and produces speedups ranging from a factor of 1.2 to 6.4, comparable with annotation-guided specializers.

Automatic Instrumentation: Wed, Oct 19th, 2:30pm, 32-G631

Relational Queries Over Program Traces (OOPSLA 2005)  [ PDF | PS ]
Simon Goldsmith, Robert O'Callahan, Alex Aiken
(Suggested by Bill Thies)

Instrumenting programs with code to monitor runtime behavior is a common technique for profiling and debugging. In practice, instrumentation is either inserted manually by programmers, or automatically by specialized tools that monitor particular properties. We propose Program Trace Query Language (PTQL), a language based on relational queries over program traces, in which programmers can write expressive, declarative queries about program behavior. We also describe our compiler, PARTIQLE. Given a PTQL query and a Java program, PARTIQLE instruments the program to execute the query on-line. We apply several PTQL queries to a set of benchmark programs, including the Apache Tomcat Web server. Our queries reveal significant performance bugs in the jack SpecJVM98 benchmark, in Tomcat, and in the IBM Java class library, as well as some correct though uncomfortably subtle code in the Xerces XML parser. We present performance measurements demonstrating that our prototype system has usable performance.
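PTQL's concrete syntax is not reproduced here; as a rough illustration of the underlying idea — program traces as relations that can be filtered, grouped, and aggregated — the following sketch runs a "hot objects" query over a tiny in-memory trace (all names and the event schema are hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TraceQuery {
    // One trace event: which method ran, on which receiver, for how long.
    record Call(String method, int objectId, long micros) {}

    // Relational flavor of the idea: "find objects whose total time in a given
    // method exceeds a threshold" as a filter / group-by / aggregate query.
    static Map<Integer, Long> hotObjects(List<Call> trace, String method, long threshold) {
        return trace.stream()
                .filter(c -> c.method().equals(method))
                .collect(Collectors.groupingBy(Call::objectId,
                         Collectors.summingLong(Call::micros)))
                .entrySet().stream()
                .filter(e -> e.getValue() > threshold)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        List<Call> trace = List.of(
                new Call("String.concat", 7, 900),
                new Call("String.concat", 7, 400),
                new Call("String.concat", 9, 100));
        System.out.println(hotObjects(trace, "String.concat", 1000));  // {7=1300}
    }
}
```

PARTIQLE's contribution is doing this online: instead of materializing the trace, it instruments the program so the query is evaluated as events occur.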


Streaming on General Processors: Wed, Oct 12th, 4pm, 32-G825

Stream Programming on General-Purpose Processors (MICRO 2005) - [MIT-only access]  [ PDF | PS ]
Jayanth Gummaraju and Mendel Rosenblum
(Suggested by Rodric Rabbah)

In this paper we investigate mapping stream programs (i.e., programs written in a streaming style for streaming architectures such as Imagine and Raw) onto a general-purpose CPU. We develop and explore a novel way of mapping these programs onto the CPU. We show how the salient features of stream programming such as computation kernels, local memories, and asynchronous bulk memory loads and stores can be easily mapped by a simple compilation system to CPU features such as the processor caches, simultaneous multi-threading, and fast inter-thread communication support, resulting in an executable that efficiently uses CPU resources.

We present an evaluation of our mapping on a hyperthreaded Intel Pentium 4 CPU as a canonical example of a general-purpose processor. We compare the mapped stream program against the same program coded in a more conventional style for the general-purpose processor. Using both micro-benchmarks and scientific applications we show that programs written in a streaming style can run comparably to equivalent programs written in a traditional style. Our results show that coding programs in a streaming style can improve performance on today's machines and smooth the way for significant performance improvements with the deployment of streaming architectures.

Keywords: stream architectures/programming, prefetching, hyper-threading.

Compiler for Cell: Tue, Oct 4th, 4:30pm, 32-D507

Optimizing Compiler for a CELL Processor (PACT 2005)
Alexandre E. Eichenberger, Kathryn O'Brien, Kevin O'Brien, Peng Wu, Tong Chen, Peter H. Oden, Daniel A. Prener, Janice C. Shepherd, Byoungro So, Zehra Sura, Amy Wang, Tao Zhang, Peng Zhao, Michael Gschwind
(Suggested by Michael Gordon)

Developed for multimedia and game applications, as well as other numerically intensive workloads, the CELL processor provides support both for highly parallel codes, which have high computation and memory requirements, and for scalar codes, which require fast response time and a full-featured programming environment. This first generation CELL processor implements on a single chip a Power Architecture processor with two levels of cache, and eight attached streaming processors with their own local memories and globally coherent DMA engines. In addition to processor-level parallelism, each processing element has a Single Instruction Multiple Data (SIMD) unit that can process from 2 double-precision floating-point numbers up to 16 bytes per instruction.

This paper describes, in the context of a research prototype, several compiler techniques that aim at automatically generating high quality codes over a wide range of heterogeneous parallelism available on the CELL processor. Techniques include compiler-supported branch prediction, compiler-assisted instruction fetch, generation of scalar codes on SIMD units, automatic generation of SIMD codes, and data and code partitioning across the multiple processor elements in the system. Results indicate that significant speedup can be achieved with a high level of support from the compiler.


Last updated: 21 June 2006