Merged

Changes from all commits
39 changes: 24 additions & 15 deletions src/building/prerequisites.md
@@ -6,35 +6,44 @@ See [the `rust-lang/rust` INSTALL](https://github.com/rust-lang/rust/blob/HEAD/I

## Hardware

-You will need an internet connection to build. The bootstrapping process
-involves updating git submodules and downloading a beta compiler. It doesn't
-need to be super fast, but that can help.
+You will need an internet connection to build.
+The bootstrapping process
+involves updating git submodules and downloading a beta compiler.
+It doesn't need to be super fast, but that can help.

There are no strict hardware requirements, but building the compiler is
computationally expensive, so a beefier machine will help, and I wouldn't
-recommend trying to build on a Raspberry Pi! We recommend the following.
-* 30GB+ of free disk space. Otherwise, you will have to keep
-clearing incremental caches. More space is better, the compiler is a bit of a
+recommend trying to build on a Raspberry Pi!
+We recommend the following.
+* 30GB+ of free disk space.
+Otherwise, you will have to keep clearing incremental caches.
+More space is better, the compiler is a bit of a
hog; it's a problem we are aware of.
* 8GB+ RAM
-* 2+ cores. Having more cores really helps. 10 or 20 or more is not too many!
+* 2+ cores.
+Having more cores really helps.
+10 or 20 or more is not too many!

-Beefier machines will lead to much faster builds. If your machine is not very
-powerful, a common strategy is to only use `./x check` on your local machine
+Beefier machines will lead to much faster builds.
+If your machine is not very powerful,
+a common strategy is to only use `./x check` on your local machine
and let the CI build test your changes when you push to a PR branch.

-Building the compiler takes more than half an hour on my moderately powerful
-laptop. We suggest downloading LLVM from CI so you don't have to build it from source
+Building the compiler takes more than half an hour on a moderately powerful
+laptop.
+We suggest downloading LLVM from CI so you don't have to build it from source
([see here][config]).

-Like `cargo`, the build system will use as many cores as possible. Sometimes
-this can cause you to run low on memory. You can use `-j` to adjust the number
-of concurrent jobs. If a full build takes more than ~45 minutes to an hour, you
+Like `cargo`, the build system will use as many cores as possible.
+Sometimes this can cause you to run low on memory.
+You can use `-j` to adjust the number of concurrent jobs.
+If a full build takes more than ~45 minutes to an hour, you
are probably spending most of the time swapping memory in and out; try using
`-j1`.

If you don't have too much free disk space, you may want to turn off
-incremental compilation ([see here][config]). This will make compilation take
+incremental compilation ([see here][config]).
+This will make compilation take
longer (especially after a rebase), but will save a ton of space from the
incremental caches.
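For reference, turning off incremental compilation is a one-line change in the bootstrap configuration. This is a sketch, not part of the diff above: the `rust.incremental` option matches the example config shipped in the repository at the time of writing, and the file may be named `config.toml` or `bootstrap.toml` depending on the age of your checkout.

```toml
# config.toml (or bootstrap.toml in newer checkouts), at the root of the checkout.
[rust]
# Disable incremental compilation: slower rebuilds (especially after a
# rebase), but far less disk used by incremental caches.
incremental = false
```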

7 changes: 4 additions & 3 deletions src/building/suggested.md
@@ -154,9 +154,10 @@ For Neovim users, there are a few options:
#### neoconf.nvim

[neoconf.nvim][neoconf.nvim] allows for project-local configuration
-files with the native LSP. The steps for how to use it are below. Note that they require
-rust-analyzer to already be configured with Neovim. Steps for this can be
-[found here][r-a nvim lsp].
+files with the native LSP.
+The steps for how to use it are below.
+Note that they require rust-analyzer to already be configured with Neovim.
+Steps for this can be [found here][r-a nvim lsp].

1. First install the plugin.
This can be done by following the steps in the README.
41 changes: 24 additions & 17 deletions src/mir/construction.md
@@ -11,15 +11,17 @@ list of items:
* Drop code (the `Drop::drop` function is not called directly)
* Drop implementations of types without an explicit `Drop` implementation

-The lowering is triggered by calling the [`mir_built`] query. The MIR builder does
-not actually use the HIR but operates on the [THIR] instead, processing THIR
-expressions recursively.
+The lowering is triggered by calling the [`mir_built`] query.
+The MIR builder does not actually use the HIR,
+but operates on the [THIR] instead,
+processing THIR expressions recursively.

The lowering creates local variables for every argument as specified in the signature.
Next, it creates local variables for every binding specified (e.g. `(a, b): (i32, String)`)
-produces 3 bindings, one for the argument, and two for the bindings. Next, it generates
-field accesses that read the fields from the argument and writes the value to the binding
-variable.
+produces 3 bindings, one for the argument, and two for the bindings.
+Next,
+it generates field accesses that read the fields from the argument,
+and writes the value to the binding variable.

With this initialization out of the way, the lowering triggers a recursive call
to a function that generates the MIR for the body (a `Block` expression) and
@@ -52,7 +54,8 @@ fn generate_more_mir(&mut self, block: BasicBlock) -> BlockAnd<ResultType> {
```

When you invoke these functions, it is common to have a local variable `block`
-that is effectively a "cursor". It represents the point at which we are adding new MIR.
+that is effectively a "cursor".
+It represents the point at which we are adding new MIR.
When you invoke `generate_more_mir`, you want to update this cursor.
You can do this manually, but it's tedious:

Expand Down Expand Up @@ -89,18 +92,21 @@ representations:

We start out with lowering the function body to an `Rvalue` so we can create an
assignment to `RETURN_PLACE`. This `Rvalue` lowering will in turn trigger lowering to
-`Operand` for its arguments (if any). `Operand` lowering either produces a `const`
-operand, or moves/copies out of a `Place`, thus triggering a `Place` lowering. An
-expression being lowered to a `Place` can in turn trigger a temporary to be created
-if the expression being lowered contains operations. This is where the snake bites its
+`Operand` for its arguments (if any).
+`Operand` lowering either produces a `const` operand,
+or moves/copies out of a `Place`, thus triggering a `Place` lowering.
+An expression being lowered to a `Place` can in turn trigger a temporary to be created
+if the expression being lowered contains operations.
+This is where the snake bites its
own tail and we need to trigger an `Rvalue` lowering for the expression to be written
into the local.
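As a sketch of that cycle, consider `let x = (a * b) + c;` with `i32` locals: the inner `a * b` is an operation, so lowering it to a `Place` creates a temporary, which in turn needs its own `Rvalue` lowering. With overflow checks disabled, the resulting MIR looks roughly like this (local numbering and dump syntax vary between compiler versions):

```mir
_4 = Mul(copy _2, copy _3);   // temporary created for the inner `a * b`
_1 = Add(copy _4, copy _5);   // Rvalue for the outer `+`, assigned to `x`
```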

## Operator lowering

Operators on builtin types are not lowered to function calls (which would end up being
infinite recursion calls, because the trait impls just contain the operation itself
-again). Instead there are `Rvalue`s for binary and unary operators and index operations.
+again).
+Instead there are `Rvalue`s for binary and unary operators and index operations.
These `Rvalue`s later get codegened to LLVM primitive operations or LLVM intrinsics.

Operators on all other types get lowered to a function call to their `impl` of the
@@ -118,7 +124,8 @@ In [MIR] there is no difference between method calls and function calls anymore.
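The equivalence these paragraphs rely on can be checked in ordinary stable Rust; `Meters` below is a made-up example type, not something from the compiler:

```rust
use std::ops::Add;

// A user-defined type: `a + b` on it is lowered to a call of
// `<Meters as Add>::add`, not to a primitive MIR `Rvalue`.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Meters(u32);

impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}

fn main() {
    // For builtin types the trait impl just contains the primitive
    // operation again, which is why lowering `+` on `i32` to a
    // function call would recurse forever:
    assert_eq!(<i32 as Add>::add(2, 3), 2 + 3);

    // For every other type, the operator and the explicit call are the same:
    assert_eq!(Meters(2) + Meters(3), <Meters as Add>::add(Meters(2), Meters(3)));
}
```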
## Conditions

`if` conditions and `match` statements for `enum`s with variants that have no fields are
-lowered to `TerminatorKind::SwitchInt`. Each possible value (so `0` and `1` for `if`
+lowered to `TerminatorKind::SwitchInt`.
+Each possible value (so `0` and `1` for `if`
conditions) has a corresponding `BasicBlock` to which the code continues.
The argument being branched on is (again) an `Operand` representing the value of
the if condition.
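For example, `if cond { f() } else { g() }` produces MIR of roughly the following shape (simplified; real dumps contain more detail, and the syntax varies between compiler versions):

```mir
bb0: {
    // _2 is the `Operand` holding the `bool` condition
    switchInt(move _2) -> [0: bb2, otherwise: bb1];
}

bb1: {
    // `true` (then) branch: calls `f()`
}

bb2: {
    // `false` (else) branch: calls `g()`
}
```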
@@ -127,14 +134,14 @@ the if condition.

`match` statements for `enum`s with variants that have fields are lowered to
`TerminatorKind::SwitchInt`, too, but the `Operand` refers to a `Place` where the
-discriminant of the value can be found. This often involves reading the discriminant
-to a new temporary variable.
+discriminant of the value can be found.
+This often involves reading the discriminant to a new temporary variable.
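Sketched in the same simplified notation, a `match` on an `Option<i32>` might start like this, with the discriminant read into a fresh temporary first:

```mir
bb0: {
    _2 = discriminant(_1);                // read the enum's discriminant
    switchInt(move _2) -> [0: bb1, 1: bb2, otherwise: bb3]; // `None` vs `Some`
}
```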

## Aggregate construction

Aggregate values of any kind (e.g. structs or tuples) are built via `Rvalue::Aggregate`.
-All fields are
-lowered to `Operator`s. This is essentially equivalent to one assignment
+All fields are lowered to `Operand`s.
+This is essentially equivalent to one assignment
statement per aggregate field plus an assignment to the discriminant in the
case of `enum`s.
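In the same simplified notation, building a struct and an `enum` variant looks roughly like this; `Point` is a made-up example type:

```mir
_1 = Point { x: move _2, y: move _3 };  // one Operand per field
_4 = Option::<i32>::Some(move _5);      // for enums, the variant (and thus the
                                        // discriminant) is part of the aggregate
```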

3 changes: 1 addition & 2 deletions src/name-resolution.md
@@ -62,8 +62,7 @@ files and expanding `macros`.
This phase produces links from all the names in the
source to relevant places where the name was introduced.
It also generates helpful error messages,
-like typo suggestions, traits to import or lints about
-unused items.
+like typo suggestions, traits to import, or lints about unused items.

A successful run of the second phase ([`Resolver::resolve_crate`]) creates a kind
of index the rest of the compilation may use to ask about the present names
23 changes: 15 additions & 8 deletions src/thir.md
@@ -15,30 +15,37 @@ the types have been filled in, which is possible after type checking has completed.
But it has some other interesting features that distinguish it from the HIR:

- Like the MIR, the THIR only represents bodies, i.e. "executable code"; this includes
-function bodies, but also `const` initializers, for example. Specifically, all [body owners] have
-THIR created. Consequently, the THIR has no representation for items like `struct`s or `trait`s.
+function bodies, but also `const` initializers, for example.
+Specifically, all [body owners] have THIR created.
+Consequently, the THIR has no representation for items like `struct`s or `trait`s.

- Each body of THIR is only stored temporarily and is dropped as soon as it's no longer
needed, as opposed to being stored until the end of the compilation process (which
is what is done with the HIR).

- Besides making the types of all nodes available, the THIR also has additional
-desugaring compared to the HIR. For example, automatic references and dereferences
+desugaring compared to the HIR.
+For example, automatic references and dereferences
are made explicit, and method calls and overloaded operators are converted into
-plain function calls. Destruction scopes are also made explicit.
+plain function calls.
+Destruction scopes are also made explicit.

-- Statements, expressions, match arms, blocks, and parameters are stored separately. For example,
+- Statements, expressions, match arms, blocks, and parameters are stored separately.
+For example,
statements in the `stmts` array reference expressions by their index (represented as a
[`ExprId`]) in the `exprs` array.
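The desugarings listed above can be observed at the source level, since the explicit forms are ordinary Rust; this snippet just checks that the sugared and desugared spellings agree:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // A method call auto-references `v`; in the THIR this becomes a plain
    // function call with the `&` written out:
    assert_eq!(v.len(), Vec::len(&v));

    // An overloaded operator becomes a plain call to the trait method:
    assert_eq!(1i32 + 2, std::ops::Add::add(1i32, 2));
}
```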

[HIR]: ./hir.md
[`ExprId`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/thir/struct.ExprId.html
[body owners]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.BodyOwnerKind.html

-The THIR lives in [`rustc_mir_build::thir`][thir-docs]. To construct a [`thir::Expr`],
+The THIR lives in [`rustc_mir_build::thir`][thir-docs].
+To construct a [`thir::Expr`],
you can use the [`thir_body`] function, passing in the memory arena where the THIR
-will be allocated. Dropping this arena will result in the THIR being destroyed,
-which is useful to keep peak memory in check. Having a THIR representation of
+will be allocated.
+Dropping this arena will result in the THIR being destroyed,
+which is useful to keep peak memory in check.
+Having a THIR representation of
all bodies of a crate in memory at the same time would be very heavy.

You can get a debug representation of the THIR by passing the `-Zunpretty=thir-tree` flag