`nearest_common_ancestor()` currently computes the full scope chain for
both scopes, which is expensive because each element involves a hash
table lookup, and then looks for a common tail.
This patch changes `nearest_common_ancestor()` to use a different
algorithm, which starts at the given scopes and works outwards (i.e. up
the scope tree) until a common ancestor is found. This is much faster
because in most cases the common ancestor is found well before the end
of the scope chains. Also, the use of a SmallVec avoids the need for any
allocation most of the time.
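As an illustration, here is a minimal sketch of that strategy over a hypothetical parent map, using the `smallvec` crate; the real code operates on rustc's `ScopeTree`/`Scope` types, so the names and the exact walking order below are stand-ins, not the patch itself.

```rust
// Walk upwards from both scopes in lockstep, recording ancestors in
// SmallVecs (no heap allocation for short chains), and stop as soon as
// one walk reaches a scope the other has already seen.
use smallvec::SmallVec;
use std::collections::HashMap;

type Scope = u32;

fn nearest_common_ancestor(parent: &HashMap<Scope, Scope>, a: Scope, b: Scope) -> Scope {
    let mut seen_a: SmallVec<[Scope; 32]> = SmallVec::new();
    let mut seen_b: SmallVec<[Scope; 32]> = SmallVec::new();
    let (mut cur_a, mut cur_b) = (Some(a), Some(b));

    loop {
        // Take one step up from `a`, if it hasn't hit the root yet.
        if let Some(s) = cur_a {
            if seen_b.contains(&s) {
                return s;
            }
            seen_a.push(s);
            cur_a = parent.get(&s).copied();
        }
        // Take one step up from `b`.
        if let Some(s) = cur_b {
            if seen_a.contains(&s) {
                return s;
            }
            seen_b.push(s);
            cur_b = parent.get(&s).copied();
        }
        // Fallback for scopes in disjoint trees (the real code can assume
        // a single tree, so this never fires there).
        if cur_a.is_none() && cur_b.is_none() {
            return *seen_a.last().unwrap();
        }
    }
}
```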
proc_macro: Stay on the "use the cache" path more
As discovered in #50061, we're falling off the "happy path" of using a
stringified token stream more often than we should. This was due to the fact
that a user-written token like `0xf` compares unequal to the stringified token
`15` (despite being semantically equivalent).
This patch replaces the call to `eq_unspanned` with an even more awful
solution, `probably_equal_for_proc_macro`, which ignores the value of each
token and only compares the structure of the token stream, on the assumption
that the AST doesn't change one token at a time.
While this is a step towards fixing #50061, there is still one regression
from #49154 which needs to be fixed.
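As a rough illustration of the "compare structure, not values" idea, here is a sketch over a simplified, made-up token type; it is not the real `libsyntax`/`proc_macro` representation, just the shape of the comparison described above.

```rust
// Illustrative only: literal *values* are ignored, everything else must
// match structurally.
enum Tok {
    Ident(String),
    Literal(String),       // e.g. the user-written `0xf` vs the stringified `15`
    Group(char, Vec<Tok>),  // delimiter plus the nested token stream
}

fn probably_equal_for_proc_macro(a: &Tok, b: &Tok) -> bool {
    match (a, b) {
        // Identifiers still need to match exactly.
        (Tok::Ident(x), Tok::Ident(y)) => x == y,
        // Two literals are "probably equal" regardless of spelling, so
        // `0xf` vs `15` no longer knocks us off the happy path.
        (Tok::Literal(_), Tok::Literal(_)) => true,
        // Groups must use the same delimiter and have structurally equal
        // contents, token for token.
        (Tok::Group(d1, xs), Tok::Group(d2, ys)) => {
            d1 == d2
                && xs.len() == ys.len()
                && xs.iter().zip(ys).all(|(x, y)| probably_equal_for_proc_macro(x, y))
        }
        _ => false,
    }
}

fn main() {
    let user = Tok::Literal("0xf".to_string());
    let stringified = Tok::Literal("15".to_string());
    assert!(probably_equal_for_proc_macro(&user, &stringified));
}
```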
This commit transitions the `target_feature` attribute from `Normal` to
`Whitelisted`. As discovered in #50095, whether this attribute is marked as
used depends on typechecking running and executing `check_name`, but
incremental compilation doesn't currently account for this, meaning that the
attribute ends up being flagged as unused when it shouldn't be.
It seems I was a little too ambitious in hoping that `Normal` could be used,
so this instead transitions to `Whitelisted`, matching other codegen
attributes like `#[inline]`.
Closes #50095
When compiling crates we'll be calculating and parsing `#[target_feature]` for
upstream crates. We'll also be checking the stability of listed features, but we
only want to perform that check when compiling the crate that actually wrote the
relevant code. This commit updates the `target_feature` processing to ignore
foreign `DefId` instances and only check the feature whitelist for local
functions.
Closes #50094
A Box type with an associated allocator would, on its own, be a
backwards-incompatible change because of the additional type parameter,
but if that additional parameter has a default, then backwards
compatibility with the current definition of the type is preserved.
But the owned_box lang item currently doesn't allow such extra
parameters, so add support for this.
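The backwards-compatibility argument is the usual one for defaulted type parameters; here is a minimal sketch with made-up `MyBox`/`Global` types standing in for the real liballoc definitions.

```rust
// Illustrative stand-ins: `MyBox` plays the role of `Box`, and `Global`
// a default allocator. Adding the `A = Global` parameter does not break
// code that spells the type as `MyBox<T>`.
struct Global;

struct MyBox<T: ?Sized, A = Global> {
    ptr: *mut T,
    alloc: A,
}

// Pre-existing code that only names one type parameter keeps compiling.
fn takes_box(_b: MyBox<u32>) {}

fn main() {
    // Just to show the types line up; the pointer is never dereferenced.
    let b: MyBox<u32> = MyBox { ptr: std::ptr::null_mut(), alloc: Global };
    takes_box(b);
}
```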
box_free currently takes a pointer. With the prospect of the Box type
definition changing in the future to include an allocator, box_free will
also need to be aware of this. In order to prepare for that future, we
allow box_free to take a form where its arguments are the fields of the
Box.
e.g. if Box is defined as `Box(A, B, C)`, then the box_free signature
becomes `box_free(a: A, b: B, c: C)`.
We however still allow the current form (taking a pointer), so that the
same compiler can handle both forms, which helps with bootstrap.
Because box_free is now passed a pointer instead of a Box, we can stop
relying on `TypeChecker::check_box_free_inputs`, because
`TypeChecker::check_call_inputs` should be enough, like for all other
function calls.
It seems it was not actually reached anyway in cases where it would
have made a difference. (issue #50071)
Currently, MIR just passes the raw Box to box_free(), which happens to
work because, in practice, it's the same thing. But that might not be
true in the future, with Box<T, A: Alloc>.
The MIR inline pass actually fixes up the argument while inlining
box_free, but this is not enabled by default and doesn't necessarily
happen (the inline threshold needs to be passed).
This change effectively moves what the MIR inline pass does to the
elaborate_drops pass, so that box_free() is passed the raw pointer
instead of the Box.
This commit tweaks a few stable APIs in the `beta` branch before they hit
stable. The `str::is_whitespace` and `str::is_alphanumeric` functions were
deleted (added in #49381, issue at #49657). The `and_modify` APIs added
in #44734 were altered to take a `FnOnce` closure rather than a `FnMut` closure.
Closes #49581
Closes #49657
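For reference, a small usage example of the entry API in question; with an `FnOnce` bound the closure only needs to be callable once per call to `and_modify`.

```rust
use std::collections::HashMap;

fn main() {
    let mut counts: HashMap<&str, u32> = HashMap::new();
    // `and_modify` runs the closure only if the entry is occupied.
    counts.entry("key").and_modify(|v| *v += 1).or_insert(1);
    counts.entry("key").and_modify(|v| *v += 1).or_insert(1);
    assert_eq!(counts["key"], 2);
}
```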
atomic: remove 'Atomic*' from Debug output
For the same reason that we don't show `Vec { data: [0, 1, 2, 3] }` but just the array, the `AtomicUsize(1000)` output is noisy, and seeing just `1000` is likely better.
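A small before/after illustration (the output lines are shown as comments; the "before" form is the old derived-style output described above):

```rust
use std::sync::atomic::AtomicUsize;

fn main() {
    let x = AtomicUsize::new(1000);
    // Before this change: `AtomicUsize(1000)`
    // After this change:  `1000`
    println!("{:?}", x);
}
```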
This commit increases the default stack size allocated to the
wasm32-unknown-unknown target to 1MB. Currently the default stack
size is one wasm page, or 64 kilobytes. This default stack is quite small and
has caused a stack overflow or two in the wild by accident.
The current "best practice" for fixing this is to pass `-Clink-args='-z
stack-size=$bigger'`, but that's neither great nor always easy to do. A default of
1MB matches more closely with other platforms, where it's "pretty big" by
default.
Note that this was tested: if the user uses `-C link-args` to pass a custom
stack size, it is still respected, as lld seems to take the first such argument
and the one rustc passes will always be last.
Add src/test/ui regression testing for NLL
This PR changes `x.py test` so that when you are running the `ui` test suite, it will also always run `compiletest` in the new `--compare-mode=nll`, which just double-checks that when running under the experimental NLL mode, the output matches the `<source-name>.nll.stderr` file, if present.
In order to reduce the chance of a developer revolt in response to this change, this PR also includes some changes to make the `--compare-mode=nll` more user-friendly:
1. It now generates nll-specific .stamp files, and uses them (so that repeated runs can reuse previously cached results).
2. Each line of terminal output distinguishes whether we are running under `--compare-mode=nll` by printing with the prefix `[ui (nll)]` instead of just the prefix `[ui]`.
Subtask of rust-lang/rust#48879
Add rustc_trans to x.py check
r? @Mark-Simulacrum
I looked at `bootstrap/compile.rs` and `bootstrap/check.rs` to try to work out which steps were appropriate, but I'm sure I've overlooked some details here, so it's worth checking carefully that I've got all the steps right (e.g. I wasn't sure whether we want to build LLVM if necessary with `x.py check`, though I thought it was probably better to do so than not).
From a quick test, it seems to be working, though.
`char_lit` uses an allocation in order to ignore '_' chars in \u{...}
literals. This patch avoids that allocation by processing the chars
more directly.
This improves various rustc-perf benchmark measurements by up to 6%,
particularly regex, futures, clap, coercions, hyper, and encoding.
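A minimal sketch of the allocation-free approach, assuming the input is just the digits between the braces of a `\u{...}` escape; the real `char_lit` handles the full literal and reports errors through the parser rather than returning `Option`.

```rust
// Parse the digits of a `\u{...}` escape, skipping `_` separators as we
// go instead of first building a filtered String.
fn parse_unicode_escape(digits: &str) -> Option<char> {
    let mut value: u32 = 0;
    for c in digits.chars() {
        if c == '_' {
            continue; // separators are ignored, no allocation needed
        }
        value = value.checked_mul(16)?.checked_add(c.to_digit(16)?)?;
    }
    std::char::from_u32(value)
}

fn main() {
    assert_eq!(parse_unicode_escape("1_F600"), Some('😀'));
    assert_eq!(parse_unicode_escape("6_1"), Some('a'));
}
```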
eval_context.rs calls `ok_or` in multiple places with an eagerly
evaluated `EvalErrorKind::*.into()` argument, which calls
`EvalError::from()`, which calls `env::var("MIRI_BACKTRACE")`, which
allocates a String. This code is hot enough for this to have a
measurable effect on some benchmarks.
This patch changes the `ok_or` calls into `ok_or_else`, thus avoiding
the evaluations when they're not needed. As a result, most of the
rustc-perf benchmarks get a measurable speedup, particularly the
shorter-running ones, where the improvement is as high as 6%.
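A generic illustration of the difference (not the rustc code itself; `expensive_error` and `lookup` are made-up stand-ins for the `EvalErrorKind::*.into()` conversion and its call sites):

```rust
fn expensive_error() -> String {
    // Stand-in for EvalError::from(), which consults MIRI_BACKTRACE and
    // allocates a String.
    format!("eval error (MIRI_BACKTRACE={:?})", std::env::var("MIRI_BACKTRACE").ok())
}

fn lookup(v: Option<u32>) -> Result<u32, String> {
    // Before: `v.ok_or(expensive_error())` builds the error even when `v`
    // is `Some`, because the argument is evaluated eagerly.
    // After: the closure only runs on the `None` path.
    v.ok_or_else(expensive_error)
}

fn main() {
    assert_eq!(lookup(Some(3)), Ok(3));
    assert!(lookup(None).is_err());
}
```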
Only emit save-analysis data for `cargo build` tasks
Previously, we were emitting analysis data for all tasks, including `doc`. That meant we got two sets of save-analysis data, one from the normal build and one from the docs, which meant indexing with the RLS took twice as long, downloads were larger, and build times were longer.
cc https://github.com/rust-lang-nursery/rls/issues/826
r? @Mark-Simulacrum
stabilize a bunch of minor api additions
besides `ptr::NonNull::cast` (which is 4 days away from the end of FCP), all of these have had their FCP finished for a few weeks now, with minimal issues raised
* Closes #41020
* Closes #42818
* Closes #44030
* Closes #44400
* Closes #46507
* Closes #47653
* Closes #46344
the following functions will be stabilized in 1.27:
* `[T]::rsplit`
* `[T]::rsplit_mut`
* `[T]::swap_with_slice`
* `ptr::swap_nonoverlapping`
* `NonNull::cast`
* `Duration::from_micros`
* `Duration::from_nanos`
* `Duration::subsec_millis`
* `Duration::subsec_micros`
* `HashMap::remove_entry`
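For reference, a quick usage sketch of a few of the items listed above:

```rust
use std::collections::HashMap;
use std::time::Duration;

fn main() {
    // [T]::rsplit: split on a predicate, starting from the end.
    let v = [1, 2, 0, 3, 4];
    let parts: Vec<&[i32]> = v.rsplit(|&x| x == 0).collect();
    assert_eq!(parts, [&[3, 4][..], &[1, 2][..]]);

    // Duration constructors and subsecond accessors.
    let d = Duration::from_micros(1_500_000);
    assert_eq!(d.subsec_millis(), 500);

    // HashMap::remove_entry returns both the key and the value.
    let mut m = HashMap::new();
    m.insert("k", 1);
    assert_eq!(m.remove_entry("k"), Some(("k", 1)));
}
```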
This allows easy revision of the update-references.sh script (included
here) so that it can update the expected output for nll rather than
stderr. It also reminds the rustc developer via the filename that they
are looking at output generated under compare-mode=nll.
One could argue that there is still a problem with the strategy encoded here:
if we reach a scenario where a change to the compiler brings the output
under AST and NLL modes back into sync, this code will continue to
generate output to distinct `foo.stderr` and `foo.nll.stderr` files, and
will continue to copy those two files back to corresponding distinct
files in the source tree, even if the *content* of the two files is now the
same.
* Arguably the "right thing" to do in that case is to remove the
`foo.nll.stderr` file entirely.
* However, I think the real answer is that we will probably want to
double-check such cases by hand anyway. We should be regularly
double-checking the diffs between `foo.stderr` and
`foo.nll.stderr`, and if we see a zero-diff case, then we should
evaluate whether that is correct, and if so, remove the file by
hand.
* In any case, I think the default behavior encoded here (or at
least *intended* to be encoded here) is superior to the
alternative of *only* generating a `foo.nll.stderr` file if one
already existed in the source tree at the time that `compiletest`
was invoked (and otherwise unconditionally generating a
`foo.stderr` file, as was the behavior prior to this commit),
because that alternative is more likely to cause rustc developers
to overwrite a `foo.stderr` file with the stderr output from a
compare-mode=nll run, which will then break the *normal*
`compiletest` run and probably be much more confusing for the
average rustc developer.
This commit only applies the flag to the one test case,
ui/span/dropck_vec_cycle_checked.rs, that absolutely needs it. Without
the flag, that test takes an unknown amount of time (greater than 1
minute) to compile. But it's possible that other tests would also
benefit from the flag, and we may want to make it the default (after
evaluating its impact on other tests).
In terms of its known impact on other tests, I have only evaluated the
ui tests, and the *only* ui test I have found that the flag impacts
(running under NLL mode, of course) is src/test/ui/nll/issue-31567.rs.
In particular:
```
% ./build/x86_64-unknown-linux-gnu/stage1/bin/rustc ../src/test/ui/nll/issue-31567.rs
error[E0597]: `*v.0` does not live long enough
--> ../src/test/ui/nll/issue-31567.rs:22:26
|
22 | let s_inner: &'a S = &*v.0; //~ ERROR `*v.0` does not live long enough
| ^^^^^ borrowed value does not live long enough
23 | &s_inner.0
24 | }
| - borrowed value only lives until here
|
note: borrowed value must be valid for the lifetime 'a as defined on the function body at 21:1...
--> ../src/test/ui/nll/issue-31567.rs:21:1
|
21 | fn get_dangling<'a>(v: VecWrapper<'a>) -> &'a u32 {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to previous error
For more information about this error, try `rustc --explain E0597`.
% ./build/x86_64-unknown-linux-gnu/stage1/bin/rustc ../src/test/ui/nll/issue-31567.rs -Z nll-subminimal-causes
error[E0597]: `*v.0` does not live long enough
--> ../src/test/ui/nll/issue-31567.rs:22:26
|
22 | let s_inner: &'a S = &*v.0; //~ ERROR `*v.0` does not live long enough
| ^^^^^ borrowed value does not live long enough
23 | &s_inner.0
24 | }
| -
| |
| borrowed value only lives until here
| borrow later used here, when `v` is dropped
error: aborting due to previous error
For more information about this error, try `rustc --explain E0597`.
%
```