In a straight-through read of "Syntax and Semantics," the concept of a
"reference" is used here before it is explained. Mention that and link to
the section explaining references.
In a straight-through read of "Syntax and Semantics," the first time we
meet a generic, and the first time we meet a vector, is when a `Vec<T>` shows
up in this example. I'm not sure that I could argue that the whole section
should appear later in the book than the ones on vectors and generics, so
instead just give the reader a brief introduction to both and a promise to
follow up later.
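A minimal sketch (my wording, not the book's) of the kind of brief introduction that could work here:
```
// `Vec<T>` is a growable array ("vector"), generic over its element type `T`;
// vectors and generics each get a full section later in the book.
fn main() {
    let mut names: Vec<String> = Vec::new(); // here T = String
    names.push(String::from("ferris"));
    println!("{:?}", names);
}
```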
The motivation (other than removing boilerplate) is that this is a baby step towards a parser with error recovery.
[breaking-change] if you use any of the changed functions, you'll need to remove a `try!` or `panictry!`.
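A self-contained sketch of the call-site change, using a made-up `parse_number` function rather than any of the actual parser methods touched here:
```
// Hypothetical stand-in for one of the changed functions. Previously it would
// have returned a Result and callers would have written
// `try!(parse_number(s))` (or `panictry!` inside the compiler); now the error
// is handled (here: a panic) inside the function itself.
fn parse_number(s: &str) -> i32 {
    s.parse().unwrap_or_else(|e| panic!("could not parse {:?}: {}", s, e))
}

fn main() {
    let n = parse_number("42"); // no try!/panictry! needed any more
    println!("{}", n);
}
```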
This is roughly the same as my previous PR that created a dependency graph, except that:
1. The dependency graph is only optionally constructed, though this doesn't seem to make much of a difference in terms of overhead (see measurements below).
2. The dependency graph is simpler (I combined a lot of nodes).
3. The dependency graph debugging facilities are much better: you can now use `RUST_DEP_GRAPH_FILTER` to filter the dep graph to just the nodes you are interested in, which is super helpful.
4. The tests are somewhat more elaborate, including a few known bugs I need to fix in a second pass.
This is potentially a `[breaking-change]` for plugin authors. If you are poking about in tcx state or something like that, you probably want to add `let _ignore = tcx.dep_graph.in_ignore();`, which will cause your reads/writes to be ignored and not affect the dep-graph.
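For illustration, a toy, self-contained model of the ignore-guard pattern; the `DepGraph` and `IgnoreGuard` types below are invented stand-ins, not the real `tcx.dep_graph` implementation.
```
// Toy model of the RAII ignore guard; not the compiler's actual types.
use std::cell::Cell;

struct DepGraph {
    ignore_depth: Cell<usize>,
}

struct IgnoreGuard<'g> {
    graph: &'g DepGraph,
}

impl DepGraph {
    // Returns an RAII guard; while any guard is alive, reads are not recorded.
    fn in_ignore(&self) -> IgnoreGuard {
        self.ignore_depth.set(self.ignore_depth.get() + 1);
        IgnoreGuard { graph: self }
    }

    // Stand-in for "something that would normally add an edge to the dep-graph".
    fn read(&self, node: &str) {
        if self.ignore_depth.get() == 0 {
            println!("recording read of {}", node);
        }
    }
}

impl<'g> Drop for IgnoreGuard<'g> {
    fn drop(&mut self) {
        self.graph.ignore_depth.set(self.graph.ignore_depth.get() - 1);
    }
}

fn main() {
    let graph = DepGraph { ignore_depth: Cell::new(0) };
    graph.read("Hir(foo)"); // recorded
    {
        let _ignore = graph.in_ignore(); // what a plugin author would write
        graph.read("Hir(bar)");          // ignored while the guard is alive
    }
    graph.read("Hir(baz)"); // recorded again
}
```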
After this, or perhaps as an add-on to this PR in some cases, what I would like to do is the following:
- [x] Write-up a little guide to how to use this system, the debugging options available, and what the possible failure modes are.
- [ ] Introduce read-only and perhaps the `Meta` node
- [x] Replace "memoization tasks" with node from the map itself
- [ ] Fix the shortcomings, obviously! Notably, the HIR map needs to register reads, and there is some state that is not yet tracked. (Maybe as a separate PR.)
- [x] Refactor the dep-graph code so that the actual maintenance of the dep-graph occurs in a parallel thread, and the main thread simply throws things into a shared channel (probably a fixed-size channel). There is no reason for dep-graph construction to be on the main thread. (Maybe as a separate PR.)
Regarding performance: adding this tracking does add some overhead, approximately 2% in my measurements (I was comparing the build times for rustdoc). Interestingly, enabling or disabling tracking doesn't seem to do very much. I want to poke at this some more and gather a bit more data -- in some tests I've seen that 2% go away, but on others it comes back. It's not entirely clear to me if that 2% is truly due to constructing the dep-graph at all.
The next big step after this is to write some code to dump the dep-graph to disk and reload it.
r? @michaelwoerister
This considerably simplifies the code around calling functions and the translation of Resume itself. It removes the requirement that a block containing a Resume terminator is always translated after something which creates a landing pad, thus allowing us to actually translate some valid MIR we could not translate before.
However, it adds the assumption that the translator is correct (with regard to landing-pad generation) and that code will never reach the Resume terminator without going through a landing pad first. Breaking this assumption would pass an `undef` value into the personality function.
This merges the two separate Call terminators and uses a separate CallKind sub-enum instead.
Somewhat unrelatedly, copying into the destination value for a certain kind of invoke is also implemented here. See the associated comment in the code for the various details that arise with this implementation.
DivergingCall is different enough from the regular converging Call to warrant the split. This also inlines the CallData struct and creates a new CallTargets enum in order to differentiate calls that have an associated cleanup block from those that do not.
Note that this patch still does not produce the DivergingCall terminator anywhere; look for that in the next patches.
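Purely for illustration, here is a toy sketch of the kind of distinction such a sub-enum can draw; the variant names and the BasicBlock/Lvalue placeholders below are my own stand-ins, not the actual compiler definitions.
```
// Toy sketch only: placeholder aliases stand in for the compiler's
// basic-block and lvalue representations.
#![allow(dead_code)]

type BasicBlock = usize;
type Lvalue = String;

// A single Call terminator can carry a sub-enum recording whether the call
// diverges (no destination to write a return value into) and whether it has
// an associated cleanup block to unwind into.
enum CallKind {
    Diverging,                    // never returns, no cleanup block
    DivergingCleanup(BasicBlock), // never returns, unwinds into a cleanup block
    Converging {                  // returns normally, no cleanup block
        destination: Lvalue,
        target: BasicBlock,
    },
    ConvergingCleanup {           // returns normally, with a cleanup block
        destination: Lvalue,
        targets: (BasicBlock, BasicBlock),
    },
}

fn main() {}
```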
So far, calls going through `Fn::call`, `FnMut::call_mut`, or `FnOnce::call_once` have not been translated properly into MIR:
The call `f(a, b, c)` where `f: Fn(T1, T2, T3)` would end up in MIR as:
```
call `f` with arguments `a`, `b`, `c`
```
What we really want is:
```
call `Fn::call` with arguments `f`, `a`, `b`, `c`
```
This PR transforms these kinds of overloaded calls during `HIR -> HAIR` translation.
What's still a bit funky is that the `Fn` traits expect arguments to be tupled, but due to special handling in type-checking and trans we do not actually tuple the arguments, and everything still checks out fine. So, after this PR we end up with MIR containing calls where the function signature and the arguments seemingly don't match:
```
call Fn::call(&self, args: (T1, T2, T3)) with arguments `f`, `a`, `b`, `c`
```
instead of
```
call Fn::call(&self, args: (T1, T2, T3)) with arguments `f`, (`a`, `b`, `c`) // <- args tupled!
```
It would be nice if the call traits could go without special handling in MIR and later on.
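For concreteness, a tiny nightly-only illustration of the equivalence the MIR is meant to model (calling `Fn::call` explicitly needs the `fn_traits` feature; the closure and values here are made up):
```
// Nightly-only: calling Fn::call directly requires the fn_traits feature.
#![feature(fn_traits)]

fn main() {
    let f = |a: i32, b: i32, c: i32| a + b + c;

    let direct = f(1, 2, 3);                // the overloaded call syntax
    let explicit = Fn::call(&f, (1, 2, 3)); // explicit form: receiver plus tupled args
    assert_eq!(direct, explicit);
}
```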
random tables. The old code was weird anyway because it would
potentially walk traits from other crates etc. The new code fits
seamlessly with the dependency tracking.
Some history:
While getting Rust to 1.0, it was a struggle to keep the book in a
working state. I had always wanted a certain kind of TOC, but couldn't
quite get it there.
At the 11th hour, I wrote up "Rust Inside Other Languages" and "Dining
Philosophers" in an attempt to get the book in the direction I wanted to
go. They were fine, but not my best work. I wanted to further expand
this section, but it's just never going to end up happening. We're doing
the second draft of the book now, and these sections are basically gone
already.
Here are the issues with these two sections, and removing them just fixes
it all:
// Philosophers
There was always controversy over which ones were chosen, and why. This
is kind of a perpetual bikeshed, but it comes up every once in a while.
The implementation was originally supposed to show off channels, but
never did, due to time constraints. Months later, I still haven't
re-written it to use them.
People get different results and assume that means they've done something
wrong, rather than recognizing the non-determinism inherent in concurrency.
Platform differences aggravate this, as does the exact amount of sleeping
and printing.
// Rust Inside Other Languages
This section is wonderful, and shows off a strength of Rust. However,
it's not clear what qualifies a language to be in this section. And I'm
not sure how tracking a ton of other languages is gonna work into the
future; we can't test _anything_ in this section, so it's prone to
bitrot.
By removing this section, and making the Guessing Game an initial
tutorial, we will move this version of the book closer to the future
version, and just eliminate all of these questions.
In addition, this also solves the 'split-brained'-ness of having two
paths, which has endlessly confused people in the past.
I'm sad to see these sections go, but I think it's for the best.
Fixes #30471. Fixes #30163. Fixes #30162. Fixes #25488. Fixes #30345. Fixes #29590. Fixes #28713. Fixes #28915.
And probably others. This lengthy list alone is enough to show that
these should have been removed.
RIP.
Make `".".parse::<f32>()` and `".".parse::<f64>()` return Err
This fixes #30344.
This is a [breaking-change], which the libs team have classified as a
bug fix.
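A quick check of the new behavior (a bare `"."` is rejected; ordinary floats still parse):
```
fn main() {
    // After this change, a lone "." is an error for both float types.
    assert!(".".parse::<f32>().is_err());
    assert!(".".parse::<f64>().is_err());

    // Well-formed inputs still parse as before.
    assert_eq!("0.5".parse::<f64>().ok(), Some(0.5));
}
```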