Add AST validation pass and move some checks to it
The purpose of this pass is to catch constructions that fit into the AST data structures but are not permitted by the language. As an example, `impl`s don't have visibilities, but for convenience and uniformity with other items they are represented by the `Item` structure, which has a `Visibility` field.
This pass is intended to run after expansion of macros and syntax extensions (and before lowering to HIR), so it can catch erroneous constructions that were generated by them. It also allows us to remove ad hoc semantic checks from the parser, which can be bypassed by syntax extensions and occasionally by macros.
Checks can be put here if they are simple and local, don't require the results of any complex analysis like name resolution or type checking, and don't logically fall into other passes. I expect most of the errors generated by this pass to be non-fatal, allowing compilation to proceed.
I intend to move some more checks to this pass later and maybe extend it with new checks, for example identifier validity. Given that syntax extensions are going to be stabilized in the foreseeable future, it's important that they not be able to subvert the usual language rules.
In this patch I've added two new checks: a check for labels named `'static` and a check for lifetimes and labels named `'_`. The first one produces a hard error, the second a future-compatibility warning.
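For illustration, two hypothetical snippets of the kind the new checks flag (sketches, not the exact test cases from the patch):
```rust
// A label named `'static` is now rejected with a hard error.
fn labels() {
    'static: loop {
        break 'static;
    }
}

// A lifetime (or label) named `'_` now gets a future-compatibility warning.
fn underscore<'_>(x: &'_ u32) -> &'_ u32 {
    x
}
```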
Fixes https://github.com/rust-lang/rust/issues/33059 ([breaking-change])
cc https://github.com/rust-lang/rfcs/pull/1177
r? @nrc
Perform `cfg` attribute processing during macro expansion and fix bugs
This PR refactors `cfg` attribute processing and fixes bugs. More specifically:
- It merges gated feature checking for stmt/expr attributes, `cfg_attr` processing, and `cfg` processing into a single fold.
  - This allows feature gated `cfg` variables to be used in `cfg_attr` on unconfigured items. All other feature gated attributes can already be used on unconfigured items.
- It performs `cfg` attribute processing during macro expansion instead of after expansion so that macro-expanded items are configured the same as ordinary items. In particular, to match their non-expanded counterparts,
  - macro-expanded unconfigured macro invocations are no longer expanded,
  - macro-expanded unconfigured macro definitions are no longer usable, and
  - feature gated `cfg` variables on macro-expanded macro definitions/invocations are now errors.
This is a [breaking-change]. For example, the following would break:
```rust
macro_rules! m {
    () => {
        #[cfg(attr)]
        macro_rules! foo { () => {} }
        foo!(); // This will be an error

        macro_rules! bar { () => { fn f() {} } }
        #[cfg(attr)] bar!(); // This will no longer be expanded ...
        fn g() { f(); } // ... so that `f` will be unresolved.

        #[cfg(target_thread_local)] // This will be a gated feature error
        macro_rules! baz { () => {} }
    }
}

m!();
```
r? @nrc
rustc: Add a new crate type, cdylib
This commit is an implementation of [RFC 1510] which adds a new crate type,
`cdylib`, to the compiler. This new crate type differs from the existing `dylib`
crate type in a few key ways:
* No metadata is present in the final artifact
* Symbol visibility rules are the same as executables, that is only reachable
`extern` functions are visible symbols
* LTO is allowed
* All libraries are always linked statically
This commit is relatively simple, mostly just plumbing the compiler with another
crate type which takes different branches here and there. The only major change
is an implementation of the `Linker::export_symbols` function on Unix which now
actually does something. This helps restrict the public symbols from a cdylib on
Unix.
With this PR a "hello world" `cdylib` is 7.2K while the same `dylib` is 2.4MB,
which is some nice size savings!
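For reference, a minimal sketch of such a crate (the function name and contents are made up for illustration):
```rust
// Can also be selected on the command line with `--crate-type cdylib`.
#![crate_type = "cdylib"]

// Only reachable `extern` functions like this one end up as visible symbols in
// the final artifact; no crate metadata is emitted and libraries are linked statically.
#[no_mangle]
pub extern "C" fn hello() -> i32 {
    42
}
```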
[RFC 1510]: https://github.com/rust-lang/rfcs/pull/1510

Closes #33132
Only break critical edges where actually needed
Currently, to prepare for MIR trans, we break _all_ critical edges,
although we only actually need to do this for edges originating from a
call that gets translated to an invoke instruction in LLVM.
This has the unfortunate effect of undoing a bunch of the things that
SimplifyCfg has done. A particularly bad case arises when you have a
C-like enum with N variants and a derived PartialEq implementation.
In that case, the match on the (&lhs, &rhs) tuple gets translated into
nested matches with N arms each and a basic block each, resulting in N²
basic blocks. SimplifyCfg reduces that to roughly 2*N basic blocks, but
breaking the critical edges means that we go back to N².
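For illustration, a tiny enum of the problematic shape (the real case has hundreds of variants):
```rust
// The derived `eq` matches on the `(&lhs, &rhs)` tuple, giving each variant
// combination its own arm and basic block before SimplifyCfg runs.
#[derive(PartialEq)]
enum Color {
    Red,
    Green,
    Blue,
}

fn main() {
    assert!(Color::Red == Color::Red);
    assert!(Color::Red != Color::Blue);
}
```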
In nickel.rs, there is such an enum with roughly N=800. So we get about
640K basic blocks or 2.5M lines of LLVM IR. LLVM takes a while to
reduce that to the final "disr_a == disr_b".
So before this patch, we had 2.5M lines of IR with 640K basic blocks,
which took about 3.6s in LLVM to get optimized and translated.
After this patch, we get about 650K lines with about 1.6K basic blocks
and spend a little less than 0.2s in LLVM.
cc #33111
r? @Aatch
Clean up `hir::lowering`
Clean up `hir::lowering`:
- give lowering functions mutable access to the lowering context
- refactor the `lower_*` functions and other functions that take a lowering context into methods
- simplify the API that `hir::lowering` exposes to `driver`
- other miscellaneous cleanups
r? @nrc
Make --emit dep-info work correctly with -Z no-analysis again.
Previously, it would attempt to resolve some external crates that weren't necessary for dep-info output.
Fixes #33231.
Implement constant support in MIR.
All of the intended features in `trans::consts` are now supported by `mir::constant`.
The implementation is considered a temporary measure until `miri` replaces it.
A `-Z orbit` bootstrap build will only translate LLVM IR from AST for `#[rustc_no_mir]` functions.
Furthermore, almost all checks of constant expressions have been moved to MIR.
In non-`const` functions, trees of temporaries are promoted, as per RFC 1414 (rvalue promotion).
Promotion before MIR borrowck would allow reasoning about promoted values' lifetimes.
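As a quick illustration of rvalue promotion in a non-`const` fn (a sketch; RFC 1414 has the precise rules):
```rust
fn main() {
    // `&5` only refers to a constant temporary, so the temporary is promoted
    // to a `'static` location instead of living on the stack.
    let x: &'static i32 = &5;
    assert_eq!(*x, 5);
}
```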
The improved checking comes at the cost of four `[breaking-change]`s:
* repeat counts must be constant expressions, e.g.:
`let arr = [0; { println!("foo"); 5 }];` used to be allowed (it behaved like `let arr = [0; 5];`)
* dereference of a reference to a `static` cannot be used in another `static`, e.g.:
`static X: [u8; 1] = [1]; static Y: u8 = (&X)[0];` was unintentionally allowed before
* the type of a `static` *must* be `Sync`, irrespective of the initializer, e.g.
`static FOO: *const T = &BAR;` worked as `&T` is `Sync`, but it shouldn't because `*const T` isn't
* a `static` cannot wrap `UnsafeCell` around a type that *may* need drop, e.g.
`static X: MakeSync<UnsafeCell<Option<String>>> = MakeSync(UnsafeCell::new(None));`
was previously allowed based on the fact `None` alone doesn't need drop, but in `UnsafeCell`
it can be later changed to `Some(String)` which *does* need dropping
The drop restrictions are relaxed by RFC 1440 (#33156), which is implemented, but feature-gated.
However, creating `UnsafeCell` from constants is unstable, so users can just enable the feature gate.
There is now a CoreEmitter that everything desugars to, but without
losing any information. Also remove RenderSpan::FileLine. This lets the
rustc_driver tests build.
Major changes:
- Remove old snippet rendering code and use the new stuff.
- Introduce `span_label` method to add a label
- Remove EndSpan mode and replace with a fn to get the last
character of a span.
- Stop using `Option<MultiSpan>` and just use an empty `MultiSpan`
- and probably a bunch of other stuff :)
Remove the requirement that ast->hir lowering be reproducible
This PR changes the ast->hir lowerer to be non-reproducible, and it removes the lowering context's id cache.
If the `hir` of an `ast` node needs to be reproduced, we can use the hir map instead of the lowerer -- for example, `tcx.map.expect_expr(expr.id)` instead of `lower_expr(lcx, expr)`.
r? @nrc
rustc_driver: Allow running the compiler with a FileLoader
cc @nrc. I chose to implement this in such a way that it doesn't break anything. Please let me know if you want me to change anything.
Feature gate clean
This PR does a bit of cleaning in the feature-gate-handling code of libsyntax. It also fixes two bugs (#32782 and #32648). Changes include:
* Change the way the existing features are declared in `feature_gate.rs`. The array of features and the `Features` struct are now defined together by a single macro, and `featureck.py` has been updated accordingly. Note: there are now three different arrays for active, removed and accepted features instead of a single one with a `Status` item to tell whether a feature is active, removed, or accepted. This is mainly due to the way I implemented my macro in the first place, and I can switch back to a single array if needed. But an advantage of the current layout is that when an active feature is used, the parser only searches through the list of active features; it goes through the other arrays only if the feature is not found. I like to think that error checking (in this case, checking that a used feature is active) does not slow down compilation of valid code. :) But this is not very important...
* Feature-gate checking passes now use the `Features` structure instead of looking through a string vector. This should speed them up a bit. Construction of the `Features` struct should be faster too, since it is built directly when parsing features instead of calling `has_feature` dozens of times.
* The MacroVisitor pass has been removed; it was mostly useless since the `#[cfg]`-stripping phase happens before it (fixes #32648). The features that must actually be checked before expansion are now checked at the time they are used. This also allows us to check attributes that are generated by macro expansion and not visible to MacroVisitor, but are also removed by macro expansion and thus not visible to PostExpansionVisitor either. This fixes #32782. Note that in order for `#[derive_*]` to be feature-gated but still accepted when generated by `#[derive(Trait)]`, I had to do a little bit of trickery with spans that I'm not totally confident in. Please review that part carefully. (It's in `libsyntax_ext/deriving/mod.rs`.)
Note: this is a [breaking-change], since programs with feature-gated attributes on macro-generated macro invocations were not rejected before. For example:
```rust
macro_rules! bar (
    () => ()
);

macro_rules! foo (
    () => (
        #[allow_internal_unstable] //~ ERROR allow_internal_unstable side-steps
        bar!();
    );
);

foo!();
```
In fact, we make JSON the default and add an option for save-analysis-csv for the legacy behaviour.
We also rename some bits and pieces: `dxr` -> `save-analysis`.
This pass was supposed to check use of gated features before
`#[cfg]`-stripping but this was not the case since it in fact happens
after. Checks that are actually important and must be done before macro
expansion are now made where the features are actually used. Closes #32648.
Also ensure that attributes on macro-generated macro invocations are
checked as well. Closes #32782 and #32655.
Compute `target_feature` from LLVM
This is a work-in-progress fix for #31662.
The logic that computes the target features from the command line has been replaced with queries to the `TargetMachine`.
Assert that the feature strings are NUL terminated, so that they will
be well-formed as C strings.
This is a safety check to ease the maintenance and updating of the
feature lists.
The different generations of ARM floating point VFP correspond to the
LLVM CPU features named `vfp2`, `vfp3`, and `vfp4`; they are now
exposed in Rust under the same names.
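A sketch of how such a feature can be queried from Rust code (assuming the still-feature-gated `cfg_target_feature` gate; illustrative only):
```rust
#![feature(cfg_target_feature)]

// On an ARM target built with VFP2 support the first definition is compiled in;
// otherwise the fallback is used.
#[cfg(target_feature = "vfp2")]
fn has_vfp2() -> bool { true }

#[cfg(not(target_feature = "vfp2"))]
fn has_vfp2() -> bool { false }

fn main() {
    println!("vfp2 enabled: {}", has_vfp2());
}
```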
This commit fixes some crashes that would occur when checking whether the
`vfp` feature exists (the crash occurs because the linear scan of the
LLVM feature list goes past the end of the list whenever it searches
for a feature that does not exist in the LLVM tables).
rustdoc: Fix testing no_run code blocks
This was a regression introduced by #31250 where the compiler deferred returning
the results of compilation a little too late (after the `Stop` check was looked
at). This commit alters the stop point to first try to return an erroneous
`result` and only if it was successful return the sentinel `Err(0)`.
Closes #31576
Save/load incremental compilation dep graph
Contains the code to serialize/deserialize the dep graph to disk between executions. We also hash the item contents and compare to the new hashes. Also includes a unit test harness. There are definitely some known limitations, such as https://github.com/rust-lang/rust/issues/32014 and https://github.com/rust-lang/rust/issues/32015, but I am leaving those for follow-up work.
Note that this PR builds on https://github.com/rust-lang/rust/pull/32007, so the overlapping commits can be excluded from review.
r? @michaelwoerister
You can now pass `-Z incremental=dir`, as well as `-Z query-dep-graph`
if you want to enable dep-graph queries for some other purpose.
Accessor functions take the place of computed boolean flags.
Plumb obligations through librustc/infer
Like #32542, but more like #31867.
TODO before merge: make an issue for the propagation of obligations through... uh, everywhere... then replace the `#????`s with the actual issue number.
cc @jroesch
r? @nikomatsakis
Break Critical Edges and other MIR work
This PR is built on top of #32080.
This adds the basic depth-first traversals for MIR, preorder, postorder and reverse postorder. The MIR blocks are now translated using reverse postorder. There is also a transform for breaking critical edges, which includes the edges from `invoke`d calls (`Drop` and `Call`), to account for the fact that we can't add code after an `invoke`. It also stops generating the intermediate block (since the transform essentially does it if necessary already).
The kinds of cases this deals with are difficult to produce, so the test is the one I managed to get. However, it seems to bootstrap with `-Z orbit`, which it didn't before my changes.
allow RUST_BACKTRACE=0 to act as if unset
**UPDATE:** `RUST_BACKTRACE=0` now acts as if the env. var is unset! (`0` now plays the role that `disabled` had, below.)
When `RUST_BACKTRACE` is set to `disabled`, this acts as if the env. var is unset. So, either make sure `RUST_BACKTRACE` is not set OR set it to `disabled` to achieve the same effect.
Sample usage:
```bash
$ rustc -o /tmp/a.out -- <(echo 'fn main(){ panic!() }') && RUST_BACKTRACE=disabled /tmp/a.out
!! executing '/home/zazdxscf/build/1nonpkgs/rust/rust//x86_64-unknown-linux-gnu/stage2/bin//rustc' with args: '-o /tmp/a.out -- /dev/fd/63'
thread '<main>' panicked at 'explicit panic', /dev/fd/63:1
note: Run with `RUST_BACKTRACE=1` for a backtrace.
$ rustc -o /tmp/a.out -- <(echo 'fn main(){ panic!() }') && RUST_BACKTRACE=1 /tmp/a.out
!! executing '/home/zazdxscf/build/1nonpkgs/rust/rust//x86_64-unknown-linux-gnu/stage2/bin//rustc' with args: '-o /tmp/a.out -- /dev/fd/63'
thread '<main>' panicked at 'explicit panic', /dev/fd/63:1
stack backtrace:
1: 0x55709e8148c0 - sys::backtrace::tracing::imp::write::h140f24a0cfc189b98Ru
2: 0x55709e816a5b - panicking::default_hook::_$u7b$$u7b$closure$u7d$$u7d$::closure.45165
3: 0x55709e8166e8 - panicking::default_hook::hed419823688cb82aXoA
4: 0x55709e810fff - sys_common::unwind::begin_unwind_inner::hbb9642f6e212d56fmHt
5: 0x55709e810513 - sys_common::unwind::begin_unwind::h16232867470678019594
6: 0x55709e810489 - main::hb524f9576270962feaa
7: 0x55709e816314 - sys_common::unwind::try::try_fn::h1274188004693518534
8: 0x55709e813dfb - __rust_try
9: 0x55709e815dab - rt::lang_start::h712b1cd650781872ahA
10: 0x55709e810679 - main
11: 0x7efd1026859f - __libc_start_main
12: 0x55709e810348 - _start
13: 0x0 - <unknown>
```
Some programs (e.g. [vim's syntastic](https://github.com/scrooloose/syntastic), used by [rust.vim](https://github.com/rust-lang/rust.vim)) cannot unset the env. var RUST_BACKTRACE if it's already set (e.g. in .bashrc), but [they can set it to some value](cb5533e159/system/Z575/OSes/gentoo/on_baremetal/filesystem_now/gentoo/home/zazdxscf/build/1nonpkgs/rust.vim/upd (L17)), and I needed to ensure the env. var is unset in order to avoid this issue: https://github.com/rust-lang/rust/issues/29293
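A minimal sketch of the intended check (not the actual libstd code; per the update above, `0` behaves as if the variable were unset):
```rust
use std::env;

fn backtrace_enabled() -> bool {
    match env::var("RUST_BACKTRACE") {
        Ok(val) => val != "0", // any other value keeps backtraces on
        Err(_) => false,       // unset (or non-unicode): behave as before, no backtrace
    }
}

fn main() {
    println!("backtrace enabled: {}", backtrace_enabled());
}
```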
**EDIT:** Sample usage 2:
```bash
$ export RUST_BACKTRACE=1
$ rustc -o /tmp/a.out -- <(echo 'fn main(){ panic!() }') && /tmp/a.out
!! executing '/home/zazdxscf/build/1nonpkgs/rust/rust//x86_64-unknown-linux-gnu/stage2/bin//rustc' with args: '-o /tmp/a.out -- /dev/fd/63'
thread '<main>' panicked at 'explicit panic', /dev/fd/63:1
stack backtrace:
1: 0x55c2696738c0 - sys::backtrace::tracing::imp::write::h140f24a0cfc189b98Ru
2: 0x55c269675a5b - panicking::default_hook::_$u7b$$u7b$closure$u7d$$u7d$::closure.45165
3: 0x55c2696756e8 - panicking::default_hook::hed419823688cb82aXoA
4: 0x55c26966ffff - sys_common::unwind::begin_unwind_inner::hbb9642f6e212d56fmHt
5: 0x55c26966f513 - sys_common::unwind::begin_unwind::h16023941661074805588
6: 0x55c26966f489 - main::hb524f9576270962feaa
7: 0x55c269675314 - sys_common::unwind::try::try_fn::h1274188004693518534
8: 0x55c269672dfb - __rust_try
9: 0x55c269674dab - rt::lang_start::h712b1cd650781872ahA
10: 0x55c26966f679 - main
11: 0x7f593d58459f - __libc_start_main
12: 0x55c26966f348 - _start
13: 0x0 - <unknown>
$ rustc -o /tmp/a.out -- <(echo 'fn main(){ panic!() }') && RUST_BACKTRACE=disabled /tmp/a.out
!! executing '/home/zazdxscf/build/1nonpkgs/rust/rust//x86_64-unknown-linux-gnu/stage2/bin//rustc' with args: '-o /tmp/a.out -- /dev/fd/63'
thread '<main>' panicked at 'explicit panic', /dev/fd/63:1
note: Run with `RUST_BACKTRACE=1` for a backtrace.
```
rustbuild: Fix dist for non-host targets
The `rust-std` package that we produce is expected to have not only the standard
library but also libtest for compiling unit tests. Unfortunately this does not
currently happen due to the way rustbuild is structured.
There are currently two main stages of compilation in rustbuild, one for the
standard library and one for the compiler. This is primarily done to allow us to
fill in the sysroot right after the standard library has finished compiling to
continue compiling the rest of the crates. Consequently the entire compiler does
not have to explicitly depend on the standard library, and this also should
allow us to pull in crates.io dependencies into the build in the future because
they'll just naturally build against the std we just produced.
These phases, however, do not represent a cross-compiled build. Target-only
builds also require libtest, and libtest is currently part of the
all-encompassing "compiler build". There's unfortunately no way to learn about
just libtest and its dependencies (in a great and robust fashion) so to ensure
that we can copy the right artifacts over this commit introduces a new build
step, libtest.
The new libtest build step has documentation, dist, and link steps as std/rustc
already do. The compiler now depends on libtest instead of libstd, and all
compiler crates can now assume that test and its dependencies are implicitly
part of the sysroot (hence explicit dependencies being removed). This makes the
build a tad less parallel as in theory many rustc crates can be compiled in
parallel with libtest, but this likely isn't where we really need parallelism
either (all the time is still spent in the compiler).
All in all this allows the `dist-std` step to depend on both libstd and libtest,
so `rust-std` packages produced by rustbuild should start having both the
standard library and libtest.
Closes #32523
This is a combination of 16 commits:

1. allow RUST_BACKTRACE=disabled to act as if unset

   When RUST_BACKTRACE is set to "disabled" then this acts as if the env. var is unset.

2. case insensitive "DiSaBLeD" RUST_BACKTRACE value

   Previously it expected a lowercase "disabled" to treat the env. var as unset.

3. RUST_BACKTRACE=0 acts as if unset

   Previously RUST_BACKTRACE=disabled was doing the same thing.

4. RUST_BACKTRACE=0|n|no|off acts as if unset

   Previously only RUST_BACKTRACE=0 acted as if RUST_BACKTRACE was unset. Now added more options (case-insensitive): 'n', 'no' and 'off', e.g. RUST_BACKTRACE=oFF.

5. DRY on the value of 2

   DRY = don't repeat yourself. Because having to remember to keep the two places of '2' in sync is not ideal, even though this is a simple enough case.

6. Revert "DRY on the value of 2"

   This reverts commit 95a0479d5cf72a2b2d9d21ec0bed2823ed213fef. Never mind this DRY on 2, because we already have a RY on 1; besides, the code is less readable this way...

7. attempt to document unsetting RUST_BACKTRACE

8. curb allocations when checking for RUST_BACKTRACE

   This means we don't check for case-insensitivity anymore.

9. as decided, RUST_BACKTRACE=0 turns off backtrace

10. RUST_TEST_NOCAPTURE=0 acts as if unset (that is, capture is on)

    Any other value acts as if nocapture is enabled (that is, capture is off).

11. update other RUST_TEST_NOCAPTURE occurrences

    Apparently only one place needs updating.

12. update RUST_BACKTRACE in man page

13. handle an occurrence of RUST_BACKTRACE

14. ensure consistency with new rules for backtrace

15. a more concise comment for RUST_TEST_NOCAPTURE

16. update RUST_TEST_NOCAPTURE in man page
Also adds a new set of passes to run just before translation that
"prepare" the MIR for codegen. Removal of landing pads, region erasure
and breaking of critical edges are run in this pass.
Also fixes some merge/rebase errors.
Gate parser recovery via debugflag
Gate parser recovery via debugflag
Put in `-Z continue_parse_after_error`
This works by adding a method, `fn abort_if_no_parse_recovery`, to the
diagnostic handler in `syntax::errors`, and calling it after each
error is emitted in the parser.
(We might consider adding a debugflag to do such aborts in other
places where we are currently attempting recovery, such as resolve,
but I think the parser is the really important case to handle in the
face of #31994 and the parser bugs of varying degrees that were
injected by parse error recovery.)
r? @nikomatsakis
This works by adding a boolean flag, `continue_after_error`, to
`syntax::errors::Handler` that can be imperatively set to `true` or
`false` via a new `fn set_continue_after_error`.
The flag starts off true (since we generally try to recover from
compiler errors, and `Handler` is shared across all phases).
Then, during the `phase_1_parse_input`, we consult the setting of the
`-Z continue-parse-after-error` debug flag to determine whether we
should leave the flag set to `true` or should change it to `false`.
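A simplified sketch of that plumbing (hypothetical standalone types, not the real `syntax::errors::Handler`):
```rust
use std::cell::Cell;

// Hypothetical stand-in for `syntax::errors::Handler`.
struct Handler {
    continue_after_error: Cell<bool>,
}

impl Handler {
    fn new() -> Handler {
        // Starts off true: recovery is the default, since the handler is shared by all phases.
        Handler { continue_after_error: Cell::new(true) }
    }

    fn set_continue_after_error(&self, continue_after_error: bool) {
        self.continue_after_error.set(continue_after_error);
    }

    fn abort_if_no_parse_recovery(&self) {
        if !self.continue_after_error.get() {
            // The real handler aborts compilation here after the error has been emitted.
            panic!("aborting due to parse error (recovery disabled)");
        }
    }
}

fn main() {
    let handler = Handler::new();
    // During phase_1_parse_input, consult the -Z continue-parse-after-error debug flag.
    let continue_parse_after_error = true; // assumed value for the example
    handler.set_continue_after_error(continue_parse_after_error);
    handler.abort_if_no_parse_recovery(); // no-op while recovery is enabled
}
```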
This is a fairly standard transform that inserts blocks along critical
edges so code can be inserted along the edge without it affecting other
edges. The main difference is that it considers a Drop or Call
terminator that would require an `invoke` instruction in LLVM a critical
edge. This is because we can't actually insert code after an invoke, so
it ends up looking similar to a critical edge anyway.
The transform is run just before translation right now.
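A generic sketch of the edge-splitting idea on a toy CFG (illustrative only; the real transform works on MIR terminators and treats `Call`/`Drop` edges specially, as described above):
```rust
// A basic block of a toy CFG, identified by its index in a Vec.
struct Block {
    successors: Vec<usize>,
}

// An edge A -> B is "critical" when A has several successors and B has several
// predecessors. Splitting inserts an empty block along the edge so that code can
// be placed there without affecting any other edge.
fn split_critical_edges(blocks: &mut Vec<Block>) {
    // Count predecessors of every block first.
    let mut preds = vec![0usize; blocks.len()];
    for block in blocks.iter() {
        for &succ in &block.successors {
            preds[succ] += 1;
        }
    }

    let original_len = blocks.len();
    for source in 0..original_len {
        if blocks[source].successors.len() < 2 {
            continue; // a single outgoing edge is never critical
        }
        for i in 0..blocks[source].successors.len() {
            let target = blocks[source].successors[i];
            if preds[target] < 2 {
                continue; // the target has a single predecessor: not critical
            }
            // Insert a new empty block that just jumps to `target`.
            let new_block = blocks.len();
            blocks.push(Block { successors: vec![target] });
            blocks[source].successors[i] = new_block;
        }
    }
}

fn main() {
    // Diamond-ish CFG where the edge 0 -> 3 is critical.
    let mut cfg = vec![
        Block { successors: vec![1, 3] },
        Block { successors: vec![3] },
        Block { successors: vec![] },
        Block { successors: vec![2] },
    ];
    split_critical_edges(&mut cfg);
    assert_eq!(cfg.len(), 5); // exactly one block was inserted, along 0 -> 3
}
```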
Automated conversion using the untry tool [1] and the following command:
```
$ find -name '*.rs' -type f | xargs untry
```
at the root of the Rust repo.
[1]: https://github.com/japaric/untry
Move analysis for MIR borrowck
This PR adds code for doing MIR-based gathering of the moves in a `fn` and the dataflow to determine where uninitialized locations flow to, analogous to how the same thing is done in `borrowck`.
It also adds a couple attributes to print out graphviz visualizations of the analyzed MIR that includes the dataflow analysis results.
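A schematic sketch of the kind of gen/kill dataflow involved, on a toy CFG with bitsets (illustrative only, not the actual borrowck-MIR types):
```rust
// `succ[b]` lists the successor blocks of block `b`; `gens[b]`/`kills[b]` are bitsets
// of move paths that block `b` moves out of / re-initializes. We propagate
// "maybe uninitialized" bits forward to a fixed point.
fn maybe_uninit(succ: &[Vec<usize>], gens: &[u64], kills: &[u64]) -> Vec<u64> {
    let mut entry = vec![0u64; succ.len()]; // bits on entry to each block
    let mut changed = true;
    while changed {
        changed = false;
        for b in 0..succ.len() {
            let exit = (entry[b] | gens[b]) & !kills[b];
            for &s in &succ[b] {
                let merged = entry[s] | exit; // union over predecessors
                if merged != entry[s] {
                    entry[s] = merged;
                    changed = true;
                }
            }
        }
    }
    entry
}

fn main() {
    // Block 0 moves out of path 0, block 1 re-initializes it, block 2 is the join point.
    let succ = vec![vec![1, 2], vec![2], vec![]];
    let gens = vec![0b01, 0b00, 0b00];
    let kills = vec![0b00, 0b01, 0b00];
    let entry = maybe_uninit(&succ, &gens, &kills);
    // The path may still be uninitialized at the join (it was only killed along 0 -> 1 -> 2).
    assert_eq!(entry[2], 0b01);
}
```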
cc @nikomatsakis
emit (via debug!) scary message from `fn borrowck_mir` until basic
prototype is in place.
Gather children of move paths and set their kill bits in
dataflow. (Each node has a link to the child that is first among its
siblings.)
Hooked in libgraphviz-based rendering, including rendering of the borrowck
dataflow state.
Doing this well required some refactoring of the code, so I cleaned it
up more generally (adding comments to explain what it's trying to do
and how it is doing it).
Update: this newer version addresses most review comments (at least
the ones that were largely mechanical changes), but I left the more
interesting revisions to separate followup commits (in this same PR).
Implement RFC 1210: impl specialization
This PR implements [impl specialization](https://github.com/rust-lang/rfcs/pull/1210),
carefully following the proposal laid out in the RFC.
The implementation covers the bulk of the RFC. The remaining gaps I know of are:
- no checking for lifetime-dependent specialization (a soundness hole);
- no `default impl` yet;
- no support for `default` with associated consts;
I plan to cover these gaps in follow-up PRs, as per @nikomatsakis's preference.
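For context, a minimal example of what the implemented feature enables (nightly-only, behind the `specialization` feature gate):
```rust
#![feature(specialization)]

trait Greet {
    fn greet(&self) -> String;
}

// Blanket impl: `default` marks items that specializing impls may override.
impl<T> Greet for T {
    default fn greet(&self) -> String {
        "hello".to_string()
    }
}

// A more specific impl may overlap with the blanket one because it specializes it.
impl Greet for u8 {
    fn greet(&self) -> String {
        "hello, u8".to_string()
    }
}

fn main() {
    assert_eq!(0u16.greet(), "hello");
    assert_eq!(0u8.greet(), "hello, u8");
}
```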
The basic strategy is to build up a *specialization graph* during
coherence checking. Insertion into the graph locates the right place
to put an impl in the specialization hierarchy; if there is no right
place (due to partial overlap but no containment), you get an overlap
error. Specialization is consulted when selecting an impl (of course),
and the graph is consulted when propagating defaults down the
specialization hierarchy.
You might expect that the specialization graph would be used during
selection -- i.e., when actually performing specialization. This is
not done for two reasons:
- It's merely an optimization: given a set of candidates that apply,
we can determine the most specialized one by comparing them directly
for specialization, rather than consulting the graph. Given that we
also cache the results of selection, the benefit of this
optimization is questionable.
- To build the specialization graph in the first place, we need to use
selection (because we need to determine whether one impl specializes
another). Dealing with this reentrancy would require some additional
mode switch for selection. Given that there seems to be no strong
reason to use the graph anyway, we stick with a simpler approach in
selection, and use the graph only for propagating default
implementations.
Trait impl selection can succeed even when multiple impls can apply,
as long as they are part of the same specialization family. In that
case, it returns a *single* impl on success -- this is the most
specialized impl *known* to apply. However, if there are any inference
variables in play, the returned impl may not be the actual impl we
will use at trans time. Thus, we take special care to avoid projecting
associated types unless either (1) the associated type does not use
`default` and thus cannot be overridden or (2) all input types are
known concretely.
r? @nikomatsakis
Add Pass manager for MIR
A new PR, since rebasing the original one (https://github.com/rust-lang/rust/pull/31448) properly was a pain. Since then there have been several changes, the most notable of which are:
1. Removed the pretty-printing with `#[rustc_mir(graphviz/pretty)]`, mostly because we now have `--unpretty=mir`; IMHO that's the direction we should expand this functionality in;
2. Reverted the infercx change done for typeck, because typeck can make an infercx for itself by being a `MirMapPass`
r? @nikomatsakis
Distinguish fn item types to allow reification from nothing to fn pointers.
The first commit is a rebase of #26284, except for files that have moved since.
This is a [breaking-change], due to:
* each FFI function has a distinct type, like all other functions currently do
* all generic parameters on functions are recorded in their item types, e.g.:
`size_of::<u8>` & `size_of::<i8>`'s types differ despite their identical signature.
* function items are zero-sized, which will stop transmutes from working on them
The first two cases are mostly handled by the new coerce-unify logic,
which will combine incompatible function item types into function pointers,
at the outer-most level of if-else chains, match arms and array literals.
The last case is specially handled during type-checking such that transmutes
from a function item type to a pointer or integer type will continue to work for
another release cycle, but are being linted against. To get rid of warnings and
ensure your code will continue to compile, cast to a pointer before transmuting.
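A small sketch of the coerce-unify behaviour for function item types:
```rust
use std::mem::size_of;

fn main() {
    // `size_of::<u8>` and `size_of::<i8>` now have distinct zero-sized item types,
    // but at the outermost level of an if-else they unify by coercing to the
    // common function pointer type `fn() -> usize`.
    let f: fn() -> usize = if true { size_of::<u8> } else { size_of::<i8> };
    assert_eq!(f(), 1);
}
```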
Fix incorrect trait privacy error
This PR fixes #21670 by using the crate metadata instead of `ExternalExports` to determine if an external item is public.
r? @nikomatsakis
There's a lot of stuff wrong with the representation of these types:
TyFnDef doesn't actually uniquely identify a function, TyFnPtr is used to
represent method calls, TyFnDef in the sub-expression of a cast isn't
correctly reified, and probably some other stuff I haven't discovered yet.
Splitting them seems like the right first step, though.
Show `cfg` as possible argument to `--print` and make it so that `--print cfg` also outputs the `target_feature`s.
Should I also extend `src/test/run-make/print-cfg/Makefile` to check that `target_feature`s are actually printed?