This avoids the compile-time overhead of computing them twice. It also fixes
an issue where the hash computed after typeck is different from the hash before,
because typeck mutates the def-map in place.
Fixes #35549.
Fixes #35593.
Kicking off libproc_macro
This PR introduces `libproc_macro`, which is currently quite bare-bones (just a few macro construction tools and an initial `quote!` macro).
This PR also introduces a few test cases for it, and an additional `shim` file (at `src/libsyntax/ext/proc_macro_shim.rs`) to allow a facsimile usage of Macros 2.0 *today*!
Implement the `!` type
This implements the never type (`!`) and hides it behind the feature gate `#![feature(never_type)]`. With the feature gate off, things should build as normal (although some error messages may be different). With the gate on, `!` is usable as a type and diverging type variables (i.e. types that are unconstrained by anything in the code) will default to `!` instead of `()`.
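For reference, a minimal nightly-only sketch of what the gate enables, using `!` in an ordinary type position (an illustration, not code from the PR):
```rust
#![feature(never_type)]

// `!` as a first-class type: an error type that can never be constructed.
fn unwrap_never(r: Result<u32, !>) -> u32 {
    match r {
        Ok(n) => n,
        // `e` has type `!`, which coerces to `u32`; this arm can never run.
        Err(e) => e,
    }
}

fn main() {
    assert_eq!(unwrap_never(Ok(7)), 7);
}
```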
Take commandline arguments into account for incr. comp.
Implements the conservative strategy described in https://github.com/rust-lang/rust/issues/33727.
From now on, every time a new commandline option is added, one has to specify whether it influences the incremental compilation cache. I've tried to make this as automatic as possible: one just has to add either the `[TRACKED]` or the `[UNTRACKED]` marker next to the field. The `Options`, `CodegenOptions`, and `DebuggingOptions` definitions in `session::config` show plenty of examples.
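The mechanics are roughly as follows; this is an illustrative sketch in plain Rust (not the actual `rustc` options macro): only tracked fields feed the hash that decides whether the incremental cache can be reused.
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Options {
    opt_level: u32, // [TRACKED]   - a change invalidates the incremental cache
    verbose: bool,  // [UNTRACKED] - purely cosmetic, ignored for the hash
}

impl Options {
    fn dep_tracking_hash(&self) -> u64 {
        let mut hasher = DefaultHasher::new();
        self.opt_level.hash(&mut hasher); // tracked fields are hashed
        // `self.verbose` is deliberately not hashed: untracked
        hasher.finish()
    }
}

fn main() {
    let a = Options { opt_level: 2, verbose: false };
    let b = Options { opt_level: 2, verbose: true };
    // An untracked change leaves the hash (and thus the cache) intact.
    assert_eq!(a.dep_tracking_hash(), b.dep_tracking_hash());
    assert!(b.verbose);
}
```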
The PR removes some cruft from `session::config::Options`, mostly unnecessary copies of flags also present in `DebuggingOptions` or `CodegenOptions` in the same struct.
One notable removal is the `cfg` field that contained the values passed via `--cfg` commandline arguments. I chose to remove it because (1) its content is only a subset of what later is stored in `hir::Crate::config` and it's pretty likely that reading the cfgs from `Options` would not be what you wanted, and (2) we could not incorporate it into the dep-tracking hash of the `Options` struct because of how the test framework works, leaving us with a piece of untracked but vital data.
It is now recommended (just as before) to access the crate config via the `krate()` method in the HIR map.
Because the `cfg` field is not present in the `Options` struct any more, some methods in the `CompilerCalls` trait now take the crate config as an explicit parameter -- which might constitute a breaking change for plugin authors.
Implement `impl Trait` in return type position by anonymization.
This is the first step towards implementing `impl Trait` (cc #34511).
`impl Trait` types are only allowed in function and inherent method return types, and capture all named lifetime and type parameters, being invariant over them.
No lifetimes that are not explicitly named lifetime parameters are allowed to escape from the function body.
The exposed traits are only those listed explicitly, i.e. `Foo` and `Clone` in `impl Foo + Clone`, with the exception of "auto traits" (like `Send` or `Sync`) which "leak" the actual contents.
The implementation strategy is anonymization, i.e.:
```rust
fn foo<T>(xs: Vec<T>) -> impl Iterator<Item=impl FnOnce() -> T> {
xs.into_iter().map(|x| || x)
}
// is represented as:
type A</*invariant over*/ T> where A<T>: Iterator<Item=B<T>>;
type B</*invariant over*/ T> where B<T>: FnOnce() -> T;
fn foo<T>(xs: Vec<T>) -> A<T> {
xs.into_iter().map(|x| || x): $0 where $0: Iterator<Item=$1>, $1: FnOnce() -> T
}
```
`$0` and `$1` are resolved (to `iter::Map<vec::Iter<T>, closure>` and the closure, respectively) and assigned to `A` and `B`, after checking the body of `foo`. `A` and `B` are *never* resolved for user-facing type equality (typeck), but always for the low-level representation and specialization (trans).
The "auto traits" exception is implemented by collecting bounds like `impl Trait: Send` that have failed for the obscure `impl Trait` type (i.e. `A` or `B` above), pretending they succeeded within the function and trying them again after type-checking the whole crate, by replacing `impl Trait` with the real type.
While passing around values which have explicit lifetime parameters (of the function with `-> impl Trait`) in their type *should* work, regionck appears to assign inference variables in *way* too many cases, and never properly resolves them to either explicit lifetime parameters or `'static`.
We might not be able to handle lifetime parameters in `impl Trait` without changes to lifetime inference, but type parameters can have arbitrary lifetimes in them from the caller, so most type-generic use cases (and non-generic ones) should not run into this problem.
cc @rust-lang/lang
The 'cfg' in the Options struct is only the commandline-specified
subset of the crate configuration and it's almost always wrong to
read that instead of the CrateConfig in HIR crate node.
Commandline arguments influence whether incremental compilation
can use its compilation cache and thus their changes relative to
previous compilation sessions need to be taken into account. This
commit makes sure that one has to specify for every commandline
argument whether it influences incremental compilation or not.
Various improvements to the SVH
This fixes a few points for the SVH:
- incorporate resolve results into the SVH;
- don't include nested items.
r? @michaelwoerister
cc #32753 (not fully fixed I don't think)
Turn on new errors and json mode
This PR is a big switch, but on a well-worn path:
* Turns on new errors by default (and removes old skool)
* Moves json output from behind a flag
The RFC for new errors [landed](https://github.com/rust-lang/rfcs/pull/1644) and as part of that we wanted some bake time. It's now had a few weeks + all the time leading up to the RFC of people banging on it. We've also had [editors updating to the new format](https://github.com/saviorisdead/RustyCode/pull/159) and expect more to follow.
We also have an [issue on old skool](https://github.com/rust-lang/rust/issues/35330) that needs to be fixed as more errors are switched to the new style, but it seems silly to fix old skool errors when we fully intend to throw the switch in the near future.
This makes it lean towards "why not just throw the switch now, rather than waiting a couple more weeks?" The only ones I know of who wanted to parse the new format but weren't sure how are the vim folks, and I think we can reach out to them and work out something in the 8 weeks before this would appear in a stable release.
We've [hashed out](https://github.com/rust-lang/rust/issues/35330) stabilizing JSON output, and it seems like people are relatively happy making what we have v1 and then likely adding to it in the future. The idea is that we'd maintain backward compatibility and just add new fields as needed. We'll also work on a separate output format that'd be better suited for interactive tools like IDEs (since JSON messages can get a little long depending on the error).
This PR stabilizes JSON mode, allowing its use without `-Z unstable-options`
Combined, this gives editors two ways to support errors going forward: parsing the new error format or using the JSON mode. By moving JSON to stable, we can also add support to Cargo, which plugin authors tell us does help simplify their support story.
r? @nikomatsakis
cc @rust-lang/tools
Closes https://github.com/rust-lang/rust/issues/34826
Address ICEs running w/ incremental compilation and building glium
Fixes for various ICEs I encountered trying to build glium with incremental compilation enabled. Building glium now works. Of the 4 ICEs, I have test cases for 3 of them -- I didn't isolate a test for the last commit and kind of want to go do other things -- most notably, figuring out why incremental isn't saving much *effort*.
But if it seems worthwhile, I can come back and try to narrow down the problem.
r? @michaelwoerister
Fixes #34991. Fixes #32015.
Per the discussion on #34765, we make one `DepNode::Mir` variant and use
it to represent both the MIR tracking map as well as passes that operate
on MIR. We also track loads of cached MIR (which naturally comes from
metadata).
Note that the "HAIR" pass adds a read of TypeckItemBody because it uses
a myriad of tables that are not individually tracked.
[MIR] Deaggregate structs to enable further optimizations
Currently, we generate MIR like:
```
tmp0 = ...;
tmp1 = ...;
tmp3 = Foo { a: ..., b: ... };
```
This PR implements "deaggregation," i.e.:
```
tmp3.0 = ...
tmp3.1 = ...
```
Currently, the code only deaggregates structs, not enums. My understanding is that we do not have a MIR statement to set the discriminant of an enum.
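For reference, source along these lines is what builds the aggregate assignment shown above (the names are illustrative):
```rust
struct Foo {
    a: u32,
    b: u32,
}

fn main() {
    let tmp0 = 1;
    let tmp1 = 2;
    // Built as a single aggregate rvalue `Foo { a: tmp0, b: tmp1 }` in MIR;
    // the deaggregation pass rewrites it into one assignment per field.
    let tmp3 = Foo { a: tmp0, b: tmp1 };
    assert_eq!(tmp3.a + tmp3.b, 3);
}
```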
Better attribute and metaitem encapsulation throughout the compiler
This PR refactors most (hopefully all?) of the `MetaItem` interactions outside of `libsyntax` (and a few inside) to interact with `MetaItem`s through the provided traits instead of directly creating, destructuring, or matching against them. This is a necessary first step to eventually converting `MetaItem`s to internally use `TokenStream` representations (which will make `MetaItem` interactions much nicer for macro writers once the new macro system is in place).
r? @nrc
Convert built-in targets to JSON
Convert the built-in targets to JSON to ensure that the JSON parser is always fully featured. This follows on #32988 and #32847. The PR includes a number of extra commits that are just intermediate changes necessary for bisectibility and the ability to prove correctness of the change.
We used to use `Name`, but the session outlives the tokenizer, which
means that attempts to read this field after trans has completed would
otherwise panic. All reads want an `InternedString` anyhow.
Since we can know which targets are instantiable on a particular host,
it does not make sense to list invalid targets in the target print code.
Filter the list of targets to only include the targets that can be
instantiated.
Simplify librustc_errors
This is part 2 of the error crate refactor, starting with #34403.
In this refactor, I focused on slimming down the error crate to fewer moving parts. As such, I've removed quite a few parts and replaced them with simpler, straight-line code. Specifically, this PR:
* Removes BasicEmitter
* Removes emit from the emitter, leaving emit_struct
* Renames emit_struct to emit
* Removes CoreEmitter and focuses on a single Emitter
* Implements the latest changes to error format RFC (#1644)
* Removes (now-unused) code in emitter.rs and snippet.rs
* Moves more tests to the UI tester, removing some duplicate tests in the process
There is probably more that could be done with some additional refactoring, but this felt like it was getting to a good state.
r? @alexcrichton cc: @Manishearth (as there may be breaking changes in stuff I removed/changed)
Simplify the macro hygiene algorithm
This PR removes renaming from the hygiene algorithm and treats differently marked identifiers as unequal.
This change makes the scope of identifiers in `macro_rules!` items empty. That is, identifiers in `macro_rules!` definitions do not inherit any semantics from the `macro_rules!`'s scope.
Since `macro_rules!` macros are items, the scope of their identifiers "should" be the same as that of other items; in particular, the scope should contain only items. Since all items are unhygienic today, this would mean the scope should be empty.
However, the scope of an identifier in a `macro_rules!` statement today is the scope that the identifier would have if it replaced the `macro_rules!` (excluding anything unhygienic, i.e. locals only).
To continue to support this, this PR tracks the scope of each `macro_rules!` and uses it in `resolve` to ensure that an identifier expanded from a `macro_rules!` gets a chance to resolve to the locals in the `macro_rules!`'s scope.
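A small example of the `macro_rules!`-statement scoping being preserved:
```rust
fn main() {
    let x = 10;
    // The `x` inside the definition resolves to the local that is in scope
    // where the `macro_rules!` item appears (and only to locals).
    macro_rules! get_x {
        () => { x }
    }
    assert_eq!(get_x!(), 10);
}
```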
This PR is a pure refactoring. After this PR,
- `syntax::ext::expand` is much simpler.
- We can expand macros in any order without causing problems for hygiene (needed for macro modularization).
- We can deprecate or remove today's `macro_rules!` scope easily.
- Expansion performance improves by 25%, post-expansion memory usage decreases by ~5%.
- Expanding a block is no longer quadratic in the number of `let` statements (fixes #10607).
r? @nrc
Add x86 intrinsics for bit manipulation (BMI 1.0, BMI 2.0, and TBM).
This PR adds the LLVM x86 intrinsics for the bit manipulation instruction sets (BMI 1.0, BMI 2.0, and TBM).
The objective of this pull-request is to allow building a library that implements all the algorithms offered by those instruction sets, using compiler intrinsics for the targets that support them (by means of `target_feature`).
The target features added are:
- `bmi`: Bit Manipulation Instruction Set 1.0, available in Intel >= Haswell and AMD's >= Jaguar/Piledriver,
- `bmi2`: Bit Manipulation Instruction Set 2.0, available in Intel >= Haswell and AMD's >= Excavator,
- `tbm`: Trailing Bit Manipulation, available only in AMD's Piledriver (won't be available in newer CPUs).
The intrinsics added are:
- BMI 1.0:
- `bextr`: Bit field extract (with register).
- BMI 2.0:
- `bzhi`: Zero high bits starting with specified bit position.
- `pdep`: Parallel bits deposit.
- `pext`: Parallel bits extract.
- TBM:
- `bextri`: Bit field extract (with immediate).
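For intuition, here is a plain-Rust model of what `pext` computes; this is only the bit-level semantics, not the intrinsic itself or its eventual API:
```rust
// "Parallel bits extract": for each set bit in `mask` (from least significant
// upward), take the corresponding bit of `src` and pack it into the next low
// bit of the result.
fn pext_u32(src: u32, mut mask: u32) -> u32 {
    let mut result = 0u32;
    let mut out_bit = 0;
    while mask != 0 {
        let lowest = mask & mask.wrapping_neg(); // isolate lowest set bit of mask
        if src & lowest != 0 {
            result |= 1 << out_bit;
        }
        out_bit += 1;
        mask &= mask - 1; // clear lowest set bit
    }
    result
}

fn main() {
    // Bits 2, 4, 5 and 6 of `src` are selected and packed into the low bits.
    assert_eq!(pext_u32(0b1101_0110, 0b0111_0100), 0b1011);
}
```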
Currently, braced macro invocations in blocks can expand into statements (and items) except when they are last in a block, in which case they can only expand into expressions.
To allow such braced macro invocations to expand into statements in trailing position as well, this PR removes the optional expression from `ast::Block` and instead uses a `StmtKind::Expr` at the end of the statement list.
For example,
```rust
macro_rules! make_stmt {
() => { let x = 0; }
}
fn f() {
make_stmt! {} //< This is OK...
let x = 0; //< ... unless this line is commented out.
}
```
Fixes #34418.
MIR cleanups and predecessor cache
This PR cleans up a few things in MIR and adds a predecessor cache to allow graph algorithms to be run easily.
r? @nikomatsakis
Refactor away the prelude injection fold
Instead, just inject `#[prelude_import] use [core|std]::prelude::v1::*;` at the crate root while injecting `extern crate [core|std];` and process `#[no_implicit_prelude]` attributes in `resolve`.
r? @nrc
rustc: Try to contain prepends to PATH
This commit attempts to bring our prepends to PATH on Windows when loading
plugins because we've been seeing quite a few issues with failing to spawn a
process on Windows, the leading theory of which is that PATH is too large as a
result of this. Currently this is mostly a stab in the dark as it's not
confirmed to actually fix the problem, but it's probably not a bad change to
have anyway!
cc #33844. Closes #17360.
Add AST validation pass and move some checks to it
The purpose of this pass is to catch constructions that fit into the AST data structures but are not permitted by the language. As an example, `impl`s don't have visibilities, but for convenience and uniformity with other items they are represented by the `Item` structure, which has a `Visibility` field.
This pass is intended to run after expansion of macros and syntax extensions (and before lowering to HIR), so it can catch erroneous constructions that were generated by them. This pass lets us remove ad hoc semantic checks from the parser, which can be overruled by syntax extensions and occasionally by macros.
Checks can be put here if they are simple and local, don't require the results of any complex analysis like name resolution or type checking, and maybe don't logically fall into other passes. I expect most errors generated by this pass to be non-fatal, allowing compilation to proceed.
I intend to move some more checks to this pass later and maybe extend it with new checks, like, for example, identifier validity. Given that syntax extensions are going to be stabilized in the foreseeable future, it's important that they not be able to subvert the usual language rules.
In this patch I've added two new checks - a check for labels named `'static` and a check for lifetimes and labels named `'_`. The first one gives a hard error, the second one - a future compatibility warning.
Fixes https://github.com/rust-lang/rust/issues/33059 ([breaking-change])
cc https://github.com/rust-lang/rfcs/pull/1177
r? @nrc
Perform `cfg` attribute processing during macro expansion and fix bugs
This PR refactors `cfg` attribute processing and fixes bugs. More specifically:
- It merges gated feature checking for stmt/expr attributes, `cfg_attr` processing, and `cfg` processing into a single fold.
- This allows feature gated `cfg` variables to be used in `cfg_attr` on unconfigured items. All other feature gated attributes can already be used on unconfigured items.
- It performs `cfg` attribute processing during macro expansion instead of after expansion so that macro-expanded items are configured the same as ordinary items. In particular, to match their non-expanded counterparts,
- macro-expanded unconfigured macro invocations are no longer expanded,
- macro-expanded unconfigured macro definitions are no longer usable, and
- feature gated `cfg` variables on macro-expanded macro definitions/invocations are now errors.
This is a [breaking-change]. For example, the following would break:
```rust
macro_rules! m {
() => {
#[cfg(attr)]
macro_rules! foo { () => {} }
foo!(); // This will be an error
macro_rules! bar { () => { fn f() {} } }
#[cfg(attr)] bar!(); // This will no longer be expanded ...
fn g() { f(); } // ... so that `f` will be unresolved.
#[cfg(target_thread_local)] // This will be a gated feature error
macro_rules! baz { () => {} }
}
}
m!();
```
r? @nrc
rustc: Add a new crate type, cdylib
This commit is an implementation of [RFC 1510] which adds a new crate type,
`cdylib`, to the compiler. This new crate type differs from the existing `dylib`
crate type in a few key ways:
* No metadata is present in the final artifact
* Symbol visibility rules are the same as executables, that is only reachable
`extern` functions are visible symbols
* LTO is allowed
* All libraries are always linked statically
This commit is relatively simple, just plumbing the compiler with another
crate type which takes different branches here and there. The only major change
is an implementation of the `Linker::export_symbols` function on Unix which now
actually does something. This helps restrict the public symbols from a cdylib on
Unix.
With this PR a "hello world" `cdylib` is 7.2K while the same `dylib` is 2.4MB,
which is some nice size savings!
[RFC 1510]: https://github.com/rust-lang/rfcs/pull/1510
Closes #33132
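For illustration, a minimal `cdylib` looks something like this (the function name is made up):
```rust
// hello.rs -- build with: rustc --crate-type=cdylib hello.rs
// Only reachable `extern` functions such as this one end up as visible symbols.
#[no_mangle]
pub extern "C" fn hello_answer() -> u32 {
    42
}
```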
Only break critical edges where actually needed
Currently, to prepare for MIR trans, we break _all_ critical edges,
although we only actually need to do this for edges originating from a
call that gets translated to an invoke instruction in LLVM.
This has the unfortunate effect of undoing a bunch of the things that
SimplifyCfg has done. A particularly bad case arises when you have a
C-like enum with N variants and a derived PartialEq implementation.
In that case, the match on the (&lhs, &rhs) tuple gets translated into
nested matches with N arms each and a basic block each, resulting in N²
basic blocks. SimplifyCfg reduces that to roughly 2*N basic blocks, but
breaking the critical edges means that we go back to N².
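For reference, this is the shape of code that triggers it (with far more variants in practice):
```rust
// A C-like enum with a derived `PartialEq`: the derived impl matches on the
// `(&lhs, &rhs)` tuple, which is what produces the nested N-arm matches.
#[derive(PartialEq)]
enum Tag {
    A,
    B,
    C, // ... imagine hundreds more variants
}

fn main() {
    assert!(Tag::A == Tag::A);
    assert!(Tag::B != Tag::C);
}
```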
In nickel.rs, there is such an enum with roughly N=800. So we get about
640K basic blocks or 2.5M lines of LLVM IR. LLVM takes a while to
reduce that to the final "disr_a == disr_b".
So before this patch, we had 2.5M lines of IR with 640K basic blocks,
which took about 3.6s in LLVM to get optimized and translated.
After this patch, we get about 650K lines with about 1.6K basic blocks
and spend a little less than 0.2s in LLVM.
cc #33111
r? @Aatch
Clean up `hir::lowering`
Clean up `hir::lowering`:
- give lowering functions mutable access to the lowering context
- refactor the `lower_*` functions and other functions that take a lowering context into methods
- simplify the API that `hir::lowering` exposes to `driver`
- other miscellaneous cleanups
r? @nrc
Make --emit dep-info work correctly with -Z no-analysis again.
Previously, it would attempt to resolve some external crates that weren't necessary for dep-info output.
Fixes#33231.
Implement constant support in MIR.
All of the intended features in `trans::consts` are now supported by `mir::constant`.
The implementation is considered a temporary measure until `miri` replaces it.
A `-Z orbit` bootstrap build will translate LLVM IR from the AST only for `#[rustc_no_mir]` functions.
Furthermore, almost all checks of constant expressions have been moved to MIR.
In non-`const` functions, trees of temporaries are promoted, as per RFC 1414 (rvalue promotion).
Promotion before MIR borrowck would allow reasoning about promoted values' lifetimes.
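A small example of the promotion described here: the temporary created for `&3` contains only constant data, so it is promoted out of the function and gets a `'static` lifetime.
```rust
fn three() -> &'static i32 {
    // Without promotion this would be a dangling borrow of a local temporary;
    // with RFC 1414 the temporary is promoted to a `'static` location.
    &3
}

fn main() {
    assert_eq!(*three(), 3);
}
```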
The improved checking comes at the cost of four `[breaking-change]`s:
* repeat counts must contain a constant expression, e.g.:
`let arr = [0; { println!("foo"); 5 }];` used to be allowed (it behaved like `let arr = [0; 5];`)
* dereference of a reference to a `static` cannot be used in another `static`, e.g.:
`static X: [u8; 1] = [1]; static Y: u8 = (&X)[0];` was unintentionally allowed before
* the type of a `static` *must* be `Sync`, irrespective of the initializer, e.g.
`static FOO: *const T = &BAR;` worked as `&T` is `Sync`, but it shouldn't because `*const T` isn't
* a `static` cannot wrap `UnsafeCell` around a type that *may* need drop, e.g.
`static X: MakeSync<UnsafeCell<Option<String>>> = MakeSync(UnsafeCell::new(None));`
was previously allowed based on the fact `None` alone doesn't need drop, but in `UnsafeCell`
it can be later changed to `Some(String)` which *does* need dropping
The drop restrictions are relaxed by RFC 1440 (#33156), which is implemented, but feature-gated.
However, creating `UnsafeCell` from constants is unstable, so users can just enable the feature gate.
There is now a CoreEmitter that everything desugars to, but without
losing any information. Also remove RenderSpan::FileLine. This lets the
rustc_driver tests build.
Major changes:
- Remove old snippet rendering code and use the new stuff.
- Introduce `span_label` method to add a label
- Remove EndSpan mode and replace with a fn to get the last
character of a span.
- Stop using `Option<MultiSpan>` and just use an empty `MultiSpan`
- and probably a bunch of other stuff :)
Remove the requirement that ast->hir lowering be reproducible
This PR changes the ast->hir lowerer to be non-reproducible, and it removes the lowering context's id cache.
If the `hir` of an `ast` node needs to be reproduced, we can use the hir map instead of the lowerer -- for example, `tcx.map.expect_expr(expr.id)` instead of `lower_expr(lcx, expr)`.
r? @nrc
rustc_driver: Allow running the compiler with a FileLoader
cc @nrc. I chose to implement this in such a way that it doesn't break anything. Please let me know if you want me to change anything.
Feature gate clean
This PR does a bit of cleaning in the feature-gate-handling code of libsyntax. It also fixes two bugs (#32782 and #32648). Changes include:
* Change the way the existing features are declared in `feature_gate.rs`. The array of features and the `Features` struct are now defined together by a single macro. `featureck.py` has been updated accordingly. Note: there are now three different arrays for active, removed and accepted features instead of a single one with a `Status` item to tell whether a feature is active, removed, or accepted. This is mainly due to the way I implemented my macro in the first place and I can switch back to a single array if needed. But an advantage of the way it is now is that when an active feature is used, the parser only searches through the list of active features. It goes through the other arrays only if the feature is not found. I like to think that error checking (in this case, checking that a used feature is active) does not slow down compilation of valid code. :) But this is not very important...
* The feature-gate checking passes now use the `Features` structure instead of looking through a string vector. This should speed them up a bit. The construction of the `Features` struct should be faster too since it is built directly when parsing features instead of calling `has_feature` dozens of times.
* The MacroVisitor pass has been removed; it was mostly useless since the `#[cfg]`-stripping phase happens before it (fixes #32648). The features that must actually be checked before expansion are now checked at the time they are used. This also allows us to check attributes that are generated by macro expansion and not visible to MacroVisitor, but are also removed by macro expansion and thus not visible to PostExpansionVisitor either. This fixes #32782. Note that in order for `#[derive_*]` to be feature-gated but still accepted when generated by `#[derive(Trait)]`, I had to do a little bit of trickery with spans that I'm not totally confident in. Please review that part carefully. (It's in `libsyntax_ext/deriving/mod.rs`.)
Note: this is a [breaking-change], since programs with feature-gated attributes on macro-generated macro invocations were not rejected before. For example:
```rust
macro_rules! bar (
() => ()
);
macro_rules! foo (
() => (
#[allow_internal_unstable] //~ ERROR allow_internal_unstable side-steps
bar!();
);
);
foo!();
```
In fact, we make JSON the default and add an option for save-analysis-csv for the legacy behaviour.
We also rename some bits and pieces from `dxr` to `save-analysis`.
This pass was supposed to check use of gated features before
`#[cfg]`-stripping but this was not the case since it in fact happens
after. Checks that are actually important and must be done before macro
expansion are now made where the features are actually used. Closes #32648.
Also ensure that attributes on macro-generated macro invocations are
checked as well. Closes #32782 and #32655.
Compute `target_feature` from LLVM
This is a work-in-progress fix for #31662.
The logic that computes the target features from the command line has been replaced with queries to the `TargetMachine`.
Assert that the feature strings are NUL terminated, so that they will
be well-formed as C strings.
This is a safety check to ease the maintenance and updating of the
feature lists.
The different generations of ARM floating point VFP correspond to the
LLVM CPU features named `vfp2`, `vfp3`, and `vfp4`; they are now
exposed in Rust under the same names.
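For example, code can now select an implementation at compile time via `cfg(target_feature = ...)` using these names; a sketch (on targets without VFP4 the fallback is compiled):
```rust
#[cfg(target_feature = "vfp4")]
fn fused(a: f32, b: f32, c: f32) -> f32 {
    a.mul_add(b, c) // VFP4 provides fused multiply-add
}

#[cfg(not(target_feature = "vfp4"))]
fn fused(a: f32, b: f32, c: f32) -> f32 {
    a * b + c
}

fn main() {
    println!("{}", fused(2.0, 3.0, 1.0));
}
```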
This commit fixes some crashes that would occur when checking whether the
`vfp` feature exists (the crash happens because the linear scan of the
LLVM feature list goes past the end of the features whenever it searches
for a feature that does not exist in the LLVM tables).
rustdoc: Fix testing no_run code blocks
This was a regression introduced by #31250 where the compiler deferred returning
the results of compilation a little too late (after the `Stop` check was looked
at). This commit alters the stop point to first try to return an erroneous
`result` and only if it was successful return the sentinel `Err(0)`.
Closes #31576
Save/load incremental compilation dep graph
Contains the code to serialize/deserialize the dep graph to disk between executions. We also hash the item contents and compare to the new hashes. Also includes a unit test harness. There are definitely some known limitations, such as https://github.com/rust-lang/rust/issues/32014 and https://github.com/rust-lang/rust/issues/32015, but I am leaving those for follow-up work.
Note that this PR builds on https://github.com/rust-lang/rust/pull/32007, so the overlapping commits can be excluded from review.
r? @michaelwoerister
You can now pass `-Z incremental=dir` as well as saying `-Z
query-dep-graph` if you want to enable queries for some other
purpose. Accessor functions take the place of computed boolean flags.
Plumb obligations through librustc/infer
Like #32542, but more like #31867.
TODO before merge: make an issue for the propagation of obligations through... uh, everywhere... then replace the `#????`s with the actual issue number.
cc @jroesch
r? @nikomatsakis
Break Critical Edges and other MIR work
This PR is built on top of #32080.
This adds the basic depth-first traversals for MIR, preorder, postorder and reverse postorder. The MIR blocks are now translated using reverse postorder. There is also a transform for breaking critical edges, which includes the edges from `invoke`d calls (`Drop` and `Call`), to account for the fact that we can't add code after an `invoke`. It also stops generating the intermediate block (since the transform essentially does it if necessary already).
The kinds of cases this deals with are difficult to produce, so the test is the one I managed to get. However, it seems to bootstrap with `-Z orbit`, which it didn't before my changes.
allow RUST_BACKTRACE=0 to act as if unset
**UPDATE:** `RUST_BACKTRACE=0` now acts as if the env. var is unset! (`0` is now what `disabled` was for, below.)
When RUST_BACKTRACE is set to "disabled" then this acts as if the env. var is unset. So, either make sure `RUST_BACKTRACE` is not set OR set it to `disabled` to achieve the same effect.
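A sketch of the resulting decision rule (not the actual std code):
```rust
// RUST_BACKTRACE unset or set to "0" means "no backtrace";
// any other value enables it.
fn backtrace_enabled() -> bool {
    match std::env::var("RUST_BACKTRACE") {
        Ok(val) => val != "0",
        Err(_) => false,
    }
}

fn main() {
    println!("backtrace enabled: {}", backtrace_enabled());
}
```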
Sample usage:
```bash
$ rustc -o /tmp/a.out -- <(echo 'fn main(){ panic!() }') && RUST_BACKTRACE=disabled /tmp/a.out
!! executing '/home/zazdxscf/build/1nonpkgs/rust/rust//x86_64-unknown-linux-gnu/stage2/bin//rustc' with args: '-o /tmp/a.out -- /dev/fd/63'
thread '<main>' panicked at 'explicit panic', /dev/fd/63:1
note: Run with `RUST_BACKTRACE=1` for a backtrace.
$ rustc -o /tmp/a.out -- <(echo 'fn main(){ panic!() }') && RUST_BACKTRACE=1 /tmp/a.out
!! executing '/home/zazdxscf/build/1nonpkgs/rust/rust//x86_64-unknown-linux-gnu/stage2/bin//rustc' with args: '-o /tmp/a.out -- /dev/fd/63'
thread '<main>' panicked at 'explicit panic', /dev/fd/63:1
stack backtrace:
1: 0x55709e8148c0 - sys::backtrace::tracing::imp::write::h140f24a0cfc189b98Ru
2: 0x55709e816a5b - panicking::default_hook::_$u7b$$u7b$closure$u7d$$u7d$::closure.45165
3: 0x55709e8166e8 - panicking::default_hook::hed419823688cb82aXoA
4: 0x55709e810fff - sys_common::unwind::begin_unwind_inner::hbb9642f6e212d56fmHt
5: 0x55709e810513 - sys_common::unwind::begin_unwind::h16232867470678019594
6: 0x55709e810489 - main::hb524f9576270962feaa
7: 0x55709e816314 - sys_common::unwind::try::try_fn::h1274188004693518534
8: 0x55709e813dfb - __rust_try
9: 0x55709e815dab - rt::lang_start::h712b1cd650781872ahA
10: 0x55709e810679 - main
11: 0x7efd1026859f - __libc_start_main
12: 0x55709e810348 - _start
13: 0x0 - <unknown>
```
Some programs (e.g. [vim's syntastic](https://github.com/scrooloose/syntastic) used by [rust.vim](https://github.com/rust-lang/rust.vim)) cannot unset the env. var RUST_BACKTRACE if it's already set (e.g. in .bashrc), but [they can set it to some value](cb5533e159/system/Z575/OSes/gentoo/on_baremetal/filesystem_now/gentoo/home/zazdxscf/build/1nonpkgs/rust.vim/upd (L17)), and I needed to ensure the env. var is unset in order to avoid this issue: https://github.com/rust-lang/rust/issues/29293
**EDIT:** Sample usage 2:
```bash
$ export RUST_BACKTRACE=1
$ rustc -o /tmp/a.out -- <(echo 'fn main(){ panic!() }') && /tmp/a.out
!! executing '/home/zazdxscf/build/1nonpkgs/rust/rust//x86_64-unknown-linux-gnu/stage2/bin//rustc' with args: '-o /tmp/a.out -- /dev/fd/63'
thread '<main>' panicked at 'explicit panic', /dev/fd/63:1
stack backtrace:
1: 0x55c2696738c0 - sys::backtrace::tracing::imp::write::h140f24a0cfc189b98Ru
2: 0x55c269675a5b - panicking::default_hook::_$u7b$$u7b$closure$u7d$$u7d$::closure.45165
3: 0x55c2696756e8 - panicking::default_hook::hed419823688cb82aXoA
4: 0x55c26966ffff - sys_common::unwind::begin_unwind_inner::hbb9642f6e212d56fmHt
5: 0x55c26966f513 - sys_common::unwind::begin_unwind::h16023941661074805588
6: 0x55c26966f489 - main::hb524f9576270962feaa
7: 0x55c269675314 - sys_common::unwind::try::try_fn::h1274188004693518534
8: 0x55c269672dfb - __rust_try
9: 0x55c269674dab - rt::lang_start::h712b1cd650781872ahA
10: 0x55c26966f679 - main
11: 0x7f593d58459f - __libc_start_main
12: 0x55c26966f348 - _start
13: 0x0 - <unknown>
$ rustc -o /tmp/a.out -- <(echo 'fn main(){ panic!() }') && RUST_BACKTRACE=disabled /tmp/a.out
!! executing '/home/zazdxscf/build/1nonpkgs/rust/rust//x86_64-unknown-linux-gnu/stage2/bin//rustc' with args: '-o /tmp/a.out -- /dev/fd/63'
thread '<main>' panicked at 'explicit panic', /dev/fd/63:1
note: Run with `RUST_BACKTRACE=1` for a backtrace.
```
rustbuild: Fix dist for non-host targets
The `rust-std` package that we produce is expected to have not only the standard
library but also libtest for compiling unit tests. Unfortunately this does not
currently happen due to the way rustbuild is structured.
There are currently two main stages of compilation in rustbuild, one for the
standard library and one for the compiler. This is primarily done to allow us to
fill in the sysroot right after the standard library has finished compiling to
continue compiling the rest of the crates. Consequently the entire compiler does
not have to explicitly depend on the standard library, and this also should
allow us to pull in crates.io dependencies into the build in the future because
they'll just naturally build against the std we just produced.
These phases, however, do not represent a cross-compiled build. Target-only
builds also require libtest, and libtest is currently part of the
all-encompassing "compiler build". There's unfortunately no way to learn about
just libtest and its dependencies (in a great and robust fashion) so to ensure
that we can copy the right artifacts over this commit introduces a new build
step, libtest.
The new libtest build step has documentation, dist, and link steps as std/rustc
already do. The compiler now depends on libtest instead of libstd, and all
compiler crates can now assume that test and its dependencies are implicitly
part of the sysroot (hence explicit dependencies being removed). This makes the
build a tad less parallel as in theory many rustc crates can be compiled in
parallel with libtest, but this likely isn't where we really need parallelism
either (all the time is still spent in the compiler).
All in all this allows the `dist-std` step to depend on both libstd and libtest,
so `rust-std` packages produced by rustbuild should start having both the
standard library and libtest.
Closes #32523
/# This is a combination of 16 commits.
/# The first commit's message is:
allow RUST_BACKTRACE=disabled to act as if unset
When RUST_BACKTRACE is set to "disabled" then this acts as if the env.
var is unset.
/# This is the 2nd commit message:
case insensitive "DiSaBLeD" RUST_BACKTRACE value
previously it expected a lowercase "disabled" to treat the env. var as
unset
/# This is the 3rd commit message:
RUST_BACKTRACE=0 acts as if unset
previously RUST_BACKTRACE=disabled was doing the same thing
/# This is the 4th commit message:
RUST_BACKTRACE=0|n|no|off acts as if unset
previously only RUST_BACKTRACE=0 acted as if RUST_BACKTRACE was unset
Now added more options (case-insensitive): 'n','no' and 'off'
eg. RUST_BACKTRACE=oFF
/# This is the 5th commit message:
DRY on the value of 2
DRY=don't repeat yourself
Because having to remember to keep the two places of '2' in sync is not
ideal, even though this is a simple enough case.
/# This is the 6th commit message:
Revert "DRY on the value of 2"
This reverts commit 95a0479d5cf72a2b2d9d21ec0bed2823ed213fef.
Nevermind this DRY on 2, because we already have a RY on 1,
besides the code is less readable this way...
/# This is the 7th commit message:
attempt to document unsetting RUST_BACKTRACE
/# This is the 8th commit message:
curb allocations when checking for RUST_BACKTRACE
this means we don't check for case-insensitivity anymore
/# This is the 9th commit message:
as decided, RUST_BACKTRACE=0 turns off backtrace
/# This is the 10th commit message:
RUST_TEST_NOCAPTURE=0 acts as if unset
(that is, capture is on)
Any other value acts as if nocapture is enabled (that is, capture is off)
/# This is the 11th commit message:
update other RUST_TEST_NOCAPTURE occurrences
apparently only one place needs updating
/# This is the 12th commit message:
update RUST_BACKTRACE in man page
/# This is the 13th commit message:
handle an occurrence of RUST_BACKTRACE
/# This is the 14th commit message:
ensure consistency with new rules for backtrace
/# This is the 15th commit message:
a more concise comment for RUST_TEST_NOCAPTURE
/# This is the 16th commit message:
update RUST_TEST_NOCAPTURE in man page
Also adds a new set of passes to run just before translation that
"prepare" the MIR for codegen. Removal of landing pads, region erasure
and break critical edges are run in this pass.
Also fixes some merge/rebase errors.
Gate parser recovery via debugflag
Put in `-Z continue_parse_after_error`
This works by adding a method, `fn abort_if_no_parse_recovery`, to the
diagnostic handler in `syntax::errors`, and calling it after each
error is emitted in the parser.
(We might consider adding a debugflag to do such aborts in other
places where we are currently attempting recovery, such as resolve,
but I think the parser is the really important case to handle in the
face of #31994 and the parser bugs of varying degrees that were
injected by parse error recovery.)
r? @nikomatsakis
This works by adding a boolean flag, `continue_after_error`, to
`syntax::errors::Handler` that can be imperatively set to `true` or
`false` via a new `fn set_continue_after_error`.
The flag starts off true (since we generally try to recover from
compiler errors, and `Handler` is shared across all phases).
Then, during the `phase_1_parse_input`, we consult the setting of the
`-Z continue-parse-after-error` debug flag to determine whether we
should leave the flag set to `true` or should change it to `false`.
----
(We might consider adding a debugflag to do such aborts in other
places where we are currently attempting recovery, such as resolve,
but I think the parser is the really important case to handle in the
face of #31994 and the parser bugs of varying degrees that were
injected by parse error recovery.)
This is a fairly standard transform that inserts blocks along critical
edges so code can be inserted along the edge without it affecting other
edges. The main difference is that it considers a Drop or Call
terminator that would require an `invoke` instruction in LLVM a critical
edge. This is because we can't actually insert code after an invoke, so
it ends up looking similar to a critical edge anyway.
The transform is run just before translation right now.
Automated conversion using the untry tool [1] and the following command:
```
$ find -name '*.rs' -type f | xargs untry
```
at the root of the Rust repo.
[1]: https://github.com/japaric/untry
Move analysis for MIR borrowck
This PR adds code for doing MIR-based gathering of the moves in a `fn` and the dataflow to determine where uninitialized locations flow to, analogous to how the same thing is done in `borrowck`.
It also adds a couple attributes to print out graphviz visualizations of the analyzed MIR that includes the dataflow analysis results.
cc @nikomatsakis
Emit (via `debug!`) a scary message from `fn borrowck_mir` until a basic
prototype is in place.
Gather children of move paths and set their kill bits in
dataflow. (Each node has a link to the child that is first among its
siblings.)
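A sketch of that layout; the sibling link is an assumption about the exact representation (the text above only names the first-child link):
```rust
// Each move path stores a link to its first child, and each child links to
// its next sibling, so enumerating the children of a path is a walk along
// the sibling chain.
struct MovePath {
    first_child: Option<usize>,
    next_sibling: Option<usize>,
}

fn children(paths: &[MovePath], node: usize) -> Vec<usize> {
    let mut out = Vec::new();
    let mut cur = paths[node].first_child;
    while let Some(c) = cur {
        out.push(c);
        cur = paths[c].next_sibling;
    }
    out
}

fn main() {
    // A tiny tree: path 0 (say, `a`) with children 1 (`a.x`) and 2 (`a.y`).
    let paths = vec![
        MovePath { first_child: Some(1), next_sibling: None },
        MovePath { first_child: None, next_sibling: Some(2) },
        MovePath { first_child: None, next_sibling: None },
    ];
    assert_eq!(children(&paths, 0), vec![1, 2]);
}
```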
Hooked in libgraphviz based rendering, including of borrowck dataflow
state.
Doing this well required some refactoring of the code, so I cleaned it
up more generally (adding comments to explain what it's trying to do
and how it is doing it).
Update: this newer version addresses most review comments (at least
the ones that were largely mechanical changes), but I left the more
interesting revisions to separate followup commits (in this same PR).
Implement RFC 1210: impl specialization
This PR implements [impl specialization](https://github.com/rust-lang/rfcs/pull/1210),
carefully following the proposal laid out in the RFC.
The implementation covers the bulk of the RFC. The remaining gaps I know of are:
- no checking for lifetime-dependent specialization (a soundness hole);
- no `default impl` yet;
- no support for `default` with associated consts;
I plan to cover these gaps in follow-up PRs, as per @nikomatsakis's preference.
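For reference, a minimal use of the feature as laid out in the RFC (nightly only):
```rust
#![feature(specialization)]

trait Greet {
    fn greet(&self) -> &'static str;
}

// Blanket impl whose method is `default`, so more specific impls may override it.
impl<T> Greet for T {
    default fn greet(&self) -> &'static str {
        "hello"
    }
}

// A more specific impl that specializes the blanket one.
impl Greet for u8 {
    fn greet(&self) -> &'static str {
        "hello, byte"
    }
}

fn main() {
    assert_eq!(0u32.greet(), "hello");
    assert_eq!(0u8.greet(), "hello, byte");
}
```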
The basic strategy is to build up a *specialization graph* during
coherence checking. Insertion into the graph locates the right place
to put an impl in the specialization hierarchy; if there is no right
place (due to partial overlap but no containment), you get an overlap
error. Specialization is consulted when selecting an impl (of course),
and the graph is consulted when propagating defaults down the
specialization hierarchy.
You might expect that the specialization graph would be used during
selection -- i.e., when actually performing specialization. This is
not done for two reasons:
- It's merely an optimization: given a set of candidates that apply,
we can determine the most specialized one by comparing them directly
for specialization, rather than consulting the graph. Given that we
also cache the results of selection, the benefit of this
optimization is questionable.
- To build the specialization graph in the first place, we need to use
selection (because we need to determine whether one impl specializes
another). Dealing with this reentrancy would require some additional
mode switch for selection. Given that there seems to be no strong
reason to use the graph anyway, we stick with a simpler approach in
selection, and use the graph only for propagating default
implementations.
Trait impl selection can succeed even when multiple impls can apply,
as long as they are part of the same specialization family. In that
case, it returns a *single* impl on success -- this is the most
specialized impl *known* to apply. However, if there are any inference
variables in play, the returned impl may not be the actual impl we
will use at trans time. Thus, we take special care to avoid projecting
associated types unless either (1) the associated type does not use
`default` and thus cannot be overridden or (2) all input types are
known concretely.
r? @nikomatsakis
Add Pass manager for MIR
A new PR, since rebasing the original one (https://github.com/rust-lang/rust/pull/31448) properly was a pain. Since then there have been several changes, the most notable of which are:
1. Removed the pretty-printing with `#[rustc_mir(graphviz/pretty)]`, mostly because we now have `--unpretty=mir`, IMHO that’s the direction we should expand this functionality into;
2. Reverted the infercx change done for typeck, because typeck can make an infercx for itself by being a `MirMapPass`
r? @nikomatsakis
Distinguish fn item types to allow reification from nothing to fn pointers.
The first commit is a rebase of #26284, except for files that have moved since.
This is a [breaking-change], due to:
* each FFI function has a distinct type, like all other functions currently do
* all generic parameters on functions are recorded in their item types, e.g.:
`size_of::<u8>` & `size_of::<i8>`'s types differ despite their identical signature.
* function items are zero-sized, which will stop transmutes from working on them
The first two cases are handled in most cases with the new coerce-unify logic,
which will combine incompatible function item types into function pointers,
at the outer-most level of if-else chains, match arms and array literals.
The last case is specially handled during type-checking such that transmutes
from a function item type to a pointer or integer type will continue to work for
another release cycle, but are being linted against. To get rid of warnings and
ensure your code will continue to compile, cast to a pointer before transmuting.
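A small example of both behaviours on a current compiler:
```rust
fn foo() {}
fn bar() {}

fn main() {
    // `foo` has its own zero-sized function item type...
    assert_eq!(std::mem::size_of_val(&foo), 0);

    // ...and an if/else whose arms are different function items is
    // coerce-unified to a function pointer.
    let f: fn() = if true { foo } else { bar };
    f();
}
```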
Fix incorrect trait privacy error
This PR fixes#21670 by using the crate metadata instead of `ExternalExports` to determine if an external item is public.
r? @nikomatsakis
There's a lot of stuff wrong with the representation of these types:
TyFnDef doesn't actually uniquely identify a function, TyFnPtr is used to
represent method calls, TyFnDef in the sub-expression of a cast isn't
correctly reified, and probably some other stuff I haven't discovered yet.
Splitting them seems like the right first step, though.
Show `cfg` as a possible argument to `--print` and make it so that `--print cfg` also outputs the `target_feature`s.
Should I also extend `src/test/run-make/print-cfg/Makefile` to check that `target_feature`s are actually printed?
Gated cfg attributes are not available on the stable and beta release
channels, therefore they should not be presented to users of those
channels in order to avoid confusion.
The configuration returned by `config::build_configuration` needs to
be modified with `target_features::add_configuration` in order to also
contain the target features. This is already done for the
configuration used when compiling and when creating the documentation,
but was missing in the `cfg` printing code.
This commit adds support for *truly* unstable options in the compiler, as well
as adding warnings for the start of the deprecation path of
unstable-but-not-really options. Specifically, the following behavior is now in
place for handling unstable options:
* As before, an unconditional error is emitted if an unstable option is passed
and the `-Z unstable-options` flag is not present. Note that passing another
`-Z` flag does not require passing `-Z unstable-options` as well.
* New flags added to the compiler will be in the `Unstable` category as opposed
to the `UnstableButNotReally` category which means they will unconditionally
emit an error when used on stable.
* All current flags are in a category where, when used, they will emit warnings
that the option will soon be a hard error.
Also as before, it is intended that `-Z` is akin to `#![feature]` in a crate
where it is required to unlock unstable functionality. A nightly compiler which
is used without any `-Z` flags should only be exercising stable behavior.