

Type-directed metaprogramming in Rust

Will Crichton   —   March 18, 2018
I explore how to use Rust compiler internals to metaprogram Rust using information from the typechecker, e.g. to automatically insert garbage-collection into Rust code, and discuss the benefits and drawbacks of this approach.

All code in this note is available in the rustc-type-metaprogramming repository.

Introduction

Metaprogramming, or code that generates code[1], is broadly useful in statically typed languages for providing abstractions that are difficult to capture in the base syntax or type system. For example, Rust uses macros for simple pattern-matching-based code substitution (a more powerful and hygienic version of the C preprocessor), e.g. to implement variadic arguments like in println! and early returns like in try!.

fn main() {
    println!("{} {} {}", "This has", "many", "arguments");
}
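To make "pattern-matching-based code substitution" concrete, here is a minimal sketch of a try!-style macro written with macro_rules! (the names try_ and parse_pair are mine, purely for illustration; the real try! in the standard library also converts the error type via From):

macro_rules! try_ {
    ($e:expr) => {
        // Pure syntactic substitution: the macro never sees any types.
        match $e {
            Ok(val) => val,
            Err(err) => return Err(err),
        }
    };
}

fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), std::num::ParseIntError> {
    let x = try_!(a.parse::<i32>());
    let y = try_!(b.parse::<i32>());
    Ok((x, y))
}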

However, pattern-matching-based metaprogramming tools are limited to simple syntactic transformations. Many common use cases require introspecting a syntactic construct and generating code accordingly, most notably custom derives. There, the metaprogram takes a struct definition and generates code by looking at the struct's fields, e.g. to automatically generate serializers or SQL queries.

#[derive(Serialize)]
struct Point { x: f32, y: f32 }

fn main() {
    let origin: Point = Point { x: 0.0, y: 0.0 };
    println!("{}", origin.to_json()); // {"x": 0, "y": 0}
}
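To make "generates code by looking at the struct's fields" concrete: serde's derive actually emits an impl of its Serialize trait rather than a to_json method, but the flavor of the generated code is roughly the following hand-written sketch (the ToJson trait here is hypothetical, spelled out only for illustration):

// A hypothetical trait and the kind of impl a custom derive could emit
// for Point by walking its two fields; the real serde-generated code is
// considerably more involved.
trait ToJson {
    fn to_json(&self) -> String;
}

impl ToJson for Point {
    fn to_json(&self) -> String {
        format!("{{\"x\": {}, \"y\": {}}}", self.x, self.y)
    }
}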

Many of these custom derives are in fact examples of type-directed metaprograms, since they use the type of the struct fields to determine what code to generate. However, this approach has two limitations:

  1. This only works for items like structs, where Rust requires the programmer to explicitly write down the type of each field. Many types in Rust are never written down, but are instead inferred by the compiler.
  2. The types are only treated syntactically, not semantically. For example, if the programmer does:
     type MyFloat = f32;
    
     #[derive(Serialize)]
     struct Point { x: MyFloat, y: f32 }
    

    Then the deriver has no way to know that the types MyFloat and f32 are the same.

More broadly, the issue is that most compilers refuse to expose their type systems (or other internals) to the outside world. Even today, compilers are largely treated as black boxes whose input is a text file and whose output is either a working binary or an error message. At most, languages like Rust will expose their syntax through procedural macro systems, never providing APIs for types, lifetimes, or other program properties and intermediate representations.

However, at the same time, compilers are able to infer more than ever about our programs through static analysis. With that comes a tradeoff: requiring the programmer to write down less information about their program (while keeping it type- and memory-safe) makes the programmer more productive, but in plain text it also makes the program harder for others to read, since understanding types and lifetimes often helps us understand what a piece of code is doing. This is why IDEs are leading the charge in cracking open the compiler black box. The folks at Microsoft created the Language Server Protocol to standardize a common interface for program navigation/editing, and they've also been hard at work on Roslyn, an API for opening up the C# compiler.

The benefits of extracting knowledge out of the compiler extend well beyond IDEs. As more compiler APIs emerge, statically typed languages can begin to approach dynamically typed languages in their flexibility and extensibility, but without the overhead. It will become easier to use the introspective tools of today (debugging complex data structures, automatic serializer generation) as well as enable the language extensions of tomorrow (type-directed macro parsing, embedded high-performance DSLs). So let’s figure out how much we can already do with our current compilers!

Using the rustc API

Of the moderately popular statically typed languages that I know of, Rust has one of the nicest compilers, rustc, in terms of documentation and ease of integration. Since rustc is written in Rust, it's easy to call out to Rust compiler functions from Rust code. In the remainder of this note, then, we will look at how to use the Rust compiler to do type-directed metaprogramming of Rust code.

Before diving into details, a word of caution: the Rust compiler API is not stable at all, and changes frequently. The specific code in this note will likely be somewhat out of date in a few weeks or months. Running the code requires using the nightly builds. If you are a Rust metaprogramming compiler-hacking fanatic like me, then the specifics will help you understand how to actually use the compiler’s API. Otherwise, you can treat this as an example of what type-directed metaprogramming could look like in a brighter future where these APIs are stable. All of the code below is available in my repository rustc-type-metaprogramming. Let’s get to it!

On a high level, our first goal is just to call the Rust compiler and extract the types of a few fragments of Rust code. The Rust compiler's API can be found in its GitHub repo, along with high-level documentation. To start, we need to create a new crate and put it on nightly:

$ cargo new --bin rustc-type-metaprogramming
$ cd rustc-type-metaprogramming
$ rustup override set nightly-2018-03-19

Then we fill out the src/main.rs file:

#![feature(rustc_private, quote)]
fn main() {}

We use Rust's feature gates to explicitly declare that we intend to use the private rustc API as well as the quotation API in libsyntax (more on that later). At this point, we can look to the rustc driver (librustc_driver) to see how the Rust compiler calls its own functions from the top level (i.e. when the user calls rustc on the command line). Specifically, the run_compiler and compile_input functions give the 10,000-foot view of the compiler's stages. A plain-English explanation of this is also provided in the documentation.
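In practice, opting into rustc_private also means linking against the compiler's own crates, which ship with the nightly toolchain rather than crates.io. The declarations look roughly like this (a sketch; the exact set of crates depends on which internals you end up calling):

// Compiler-internal crates from the nightly sysroot, made linkable by
// #![feature(rustc_private)]; the exact set depends on what you use.
extern crate rustc;
extern crate rustc_driver;
extern crate syntax;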

We need to do a lot of stuff that's required by the compiler but largely irrelevant for our task (like providing command-line options, creating code maps for a non-existent source file, and setting up a bunch of compiler infrastructure). In the code snippets, I omit the uninteresting boilerplate/lifetimes/etc. with ..., but you can find the full working example in the repository. Let's say we want to type-check the function fn main() { let x = 1 + 2; }. Our metaprogramming function then looks like:

fn main() {
    ...

    let krate = {
        ...
        ast::Crate {
            ...
            quote_item!(fn main() { let x = 1 + 2; })
        }
    };

    let hir = driver::phase_2_configure_and_expand(krate, ...);
    ...

    ty::TyCtxt::create_and_enter(hir, ..., |tcx| {
        typeck::check_crate(tcx).unwrap();
        println!("Type checked successfully!");
    });
}

This consists of three steps: first, we need to produce a syntactic representation of the program. One way to do this is to represent the program as a string and then run the rustc parser, e.g.

let prog: &str = "fn main() { let x = 1 + 2; }";
let parser = Parser::new(); // not actually this easy IRL
let func: ast::Item = parser.parse(prog).unwrap();

However, a nicer way to do this is to use quotations, or macros that essentially do the parsing for us. Quotations like quote_item! take as input Rust code and return the programmatic representation of that code as a Rust syntax tree (AST), which we use above. We then wrap the function in a crate, since that’s the input the Rust compiler expects.
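One detail the snippets gloss over: in libsyntax, the quotation macros take a syntax-extension context as their first argument. The call looks roughly like this (a sketch; cx stands in for that context, and the exact return type shifts between nightlies):

// `cx` is a syntax-extension context, elided as "..." elsewhere in this note;
// quote_item! hands back the parsed item (wrapped in an Option here).
let main_fn = quote_item!(cx, fn main() { let x = 1 + 2; }).unwrap();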

Second, we convert the AST into the high-level intermediate representation (HIR), described here. HIR has fewer syntactic forms than the AST the programmer uses, e.g. for loops are converted into loop loops. Lastly, we run the typechecker by creating a type context tcx and calling typeck::check_crate with it. If our code snippet typechecks, as it does in our example, then this code will execute and print the success message. You can verify this with:

$ cargo run
Type checked successfully!
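As an aside on the AST-to-HIR step: the for-to-loop desugaring mentioned above looks roughly like the following when written out in surface Rust (illustrative only; the actual HIR lowering is more detailed than this):

// Surface syntax:
for i in 0..3 {
    println!("{}", i);
}

// Roughly what it desugars to (the real lowering also handles loop labels
// and pattern-matching the loop variable):
let mut iter = (0..3).into_iter();
loop {
    match iter.next() {
        Some(i) => {
            println!("{}", i);
        }
        None => break,
    }
}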

Extracting types from rustc

Now that we can run the compiler, we next want to extract the types it computes. Let’s say we want to print out the type of every expression in our sample program. One way to do this would be to manually traverse the syntax tree with recursive match statements, but that’s onerous and not easily extensible when the AST changes. Instead, we can use the visitor pattern where we only define behavior for the parts of the syntax tree we care about, and use default implementations for the rest.
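Outside of rustc, the idea looks roughly like this: a Visitor trait whose methods default to "just keep walking," so an implementor overrides only the cases it cares about. Here is a toy, self-contained sketch over a made-up Expr type (not the rustc API):

#[derive(Debug)]
enum Expr {
    Num(i32),
    Add(Box<Expr>, Box<Expr>),
}

trait Visitor: Sized {
    // Default behavior: keep walking the tree.
    fn visit_expr(&mut self, e: &Expr) {
        walk_expr(self, e);
    }
}

// Recurse into the children of `e`, calling back into the visitor.
fn walk_expr<V: Visitor>(v: &mut V, e: &Expr) {
    if let Expr::Add(ref lhs, ref rhs) = *e {
        v.visit_expr(lhs);
        v.visit_expr(rhs);
    }
}

struct Printer;

impl Visitor for Printer {
    fn visit_expr(&mut self, e: &Expr) {
        println!("visiting {:?}", e); // our custom behavior...
        walk_expr(self, e); // ...then continue the default traversal
    }
}

fn main() {
    let e = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)));
    let mut printer = Printer;
    printer.visit_expr(&e);
}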

Luckily, this is a common pattern in the Rust compiler so they have already implemented much of this machinery for us! Specifically, librustc::hir::intravisit provides a Visitor trait that we can implement to walk through a HIR tree.

struct TestVisitor {
    tcx: ty::TyCtxt
}

impl Visitor for TestVisitor {
    ...

    fn visit_expr(&mut self, expr: &hir::Expr) {
        let ty = self.tcx.type_of(expr); // not actually this easy IRL
        println!("Node: {:?}, type: {:?}", expr, ty);
        intravisit::walk_expr(self, expr);
    }
}

fn main() {
    ...

    ty::TyCtxt::create_and_enter(hir, ..., |tcx| {
        typeck::check_crate(tcx).unwrap();
        let mut visitor = TestVisitor { tcx: tcx };
        tcx.hir.visit(&mut visitor);
    });
}

Running this code, we get the following output:

$ cargo run
Node: expr(13: { let x = 1 + 2; }), type: ()
Node: expr(11: 1 + 2), type: i32
Node: expr(9: 1), type: i32
Node: expr(10: 2), type: i32

Awesome! We were able to access every expression and its type, even though no types were ever written down explicitly. The code works by creating a visitor TestVisitor that contains the type context tcx, which tells us the type of an expression when we ask it with type_of. The visitor walks through the HIR tree, and when it finds an expression like 1 + 2, it calls visit_expr, which prints both the expression and its type. Then we call walk_expr, which continues recursively visiting the components of the expression, 1 and 2 in this case. Note that in Rust, function bodies are blocks, and blocks are expressions, so the block { let x = 1 + 2; } is an expression whose type is the empty tuple (unit).

Auto-GC for Rust

One possible application of type-directed metaprogramming could be the automated application of garbage collection techniques to selected code blocks. Dealing with lifetimes in Rust can be difficult sometimes, so it would be nice to have the compiler automatically reference count everything by default, which in the simplest (and least-performant) case looks like this:

// before
let x: i32 = 1;
let y: i32 = x + 1;

// after
let x: Rc<i32> = Rc::new(1);
let y: Rc<i32> = Rc::new(*x + 1);

While much of this translation can be done syntactically, knowing the types during translation can help us translate at a finer granularity (only translate certain types) and produce better error messages (an error mentioning Rc<i32> really refers to an i32 in the source code). To demonstrate the simplest proof of concept, I implemented this approach as a procedural macro: auto_gc!. For example, if we have a main.rs that looks like:

...
fn main() {
    auto_gc! {
        let x = 1;
        let y: i32 = 1 + x;
    };
    println!("{:?}", *y);
}

Running cargo expand (the rough equivalent of gcc -E: it expands all the macros) then generates:

...
fn main() {
    let x: Rc<i32> = Rc::new(1);
    let y: Rc<i32> = Rc::new(*Rc::new(1) + *Rc::new(*x));
    ...
}

Note that the expanded x has an explicit Rc<i32> type annotation despite not having one in the original source, since we were able to recover the type from the Rust typechecker.

The implementation (source here) uses a "folder" (instead of a visitor) to generate an output for each node in the HIR, largely keeping the code the same except for inserting dereferences and Rc::new calls where appropriate. My code is wrapped in Rust's procedural macro interface, which allows the code inside an auto_gc! call to be replaced by arbitrary code generated by my function.
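For reference, the entry point of that interface looks roughly like the following sketch (the real auto_gc! lives in its own proc-macro crate; the body here is elided, and at the time function-like procedural macros were themselves nightly-only):

// Sketch of a function-like procedural macro's entry point; the real body
// type-checks `tokens`, folds the typed HIR to insert Rc::new(...) and
// dereferences, and pretty-prints the rewritten code back out.
extern crate proc_macro;
use proc_macro::TokenStream;

#[proc_macro]
pub fn auto_gc(tokens: TokenStream) -> TokenStream {
    // ... elided; returning the input unchanged is just a placeholder ...
    tokens
}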

Future work

I’m glad I was able to get this off the ground. Hats off to the Rust developers for the time they’ve invested in documenting the compiler. I think it will pay great dividends for the future, not just for people who want to hack on the compiler, but also for people like me who want to take it in new directions.

That said, after playing around, this approach has a number of logistical challenges today:

  1. HIR wasn’t meant to be transformed like the AST. While the AST module has a Folder trait, the HIR module does not, so I had to implement it myself.
  2. All code generation facilities are targeted towards the AST, not HIR. For example, if I’m folding over a HIR tree, I have to manually define the translation of values between HIR and AST, which is a lot of seemingly unnecessary work. There’s no quotation library for HIR.
  3. Using the compiler (in my own code) inside the compiler (e.g. in a procedural macro definition) can be dangerous. I ran into a tricky bug where I accidentally defined two string interning contexts and got some wacky results where keywords were arbitrarily mutated into other words.

None of these issues are fundamental; addressing them largely means providing better library support for munging HIR constructs and mapping them back to the AST. I intend to investigate further into what rustc needs to better enable type-directed metaprogramming.

  1. The line between metaprogramming and normal programming is quite blurry. For example, higher-order functions, i.e. functions that take or return other functions, are considered routine (distinctly normal) in functional languages like OCaml and Haskell (largely enabled by their currying-by-default). However, in Python, decorators are frequently called metaprograms, despite essentially being normal higher-order functions with syntactic sugar.