Move jointness censoring to proc_macro
Proc-macro API currently exposes jointness in `Punct` tokens. That is,
`+` in `+one` is **non**-joint.
Our lexer produces jointness info for all tokens, so we need to censor
it *somewhere*.
Previously we did this in the lexer, but it makes more sense to do it
in the proc-macro server.
r? @petrochenkov
inliner: Check for codegen fn attributes compatibility
* Check for target features compatibility
* Check for no_sanitize attribute compatibility
Fixes #76259.
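For the target-feature item above, here is a hypothetical illustration (not from the PR's test suite) of why the check matters: the MIR inliner must not inline a `#[target_feature]` callee into a caller compiled without that feature.

```rust
// Hypothetical illustration of the target-feature compatibility check: the MIR
// inliner must not inline `sum_avx2` into `sum`, because the caller is compiled
// without the `avx2` feature that the callee's body is allowed to assume.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[u32]) -> u32 {
    // The backend may use AVX2 instructions for this loop.
    xs.iter().sum()
}

#[cfg(target_arch = "x86_64")]
fn sum(xs: &[u32]) -> u32 {
    if is_x86_feature_detected!("avx2") {
        // Safe: the feature was just checked at runtime.
        unsafe { sum_avx2(xs) }
    } else {
        xs.iter().sum()
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn sum(xs: &[u32]) -> u32 {
    xs.iter().sum()
}

fn main() {
    assert_eq!(sum(&[1, 2, 3]), 6);
}
```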
Refactor byteorder to std in rustc_middle
Use std::io::{Read, Write} and {to, from}_{le, be}_bytes methods in
order to remove byteorder from librustc_middle's dependency graph.
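As a rough sketch (with made-up helper names, not the actual rustc_middle code), the conversion looks like this:

```rust
use std::io::{self, Cursor, Read, Write};

// Sketch of replacing byteorder-style calls with std equivalents:
// `to_le_bytes`/`from_le_bytes` plus `Write`/`Read`.
fn write_u32_le(out: &mut impl Write, value: u32) -> io::Result<()> {
    // Previously: out.write_u32::<LittleEndian>(value)
    out.write_all(&value.to_le_bytes())
}

fn read_u32_le(input: &mut impl Read) -> io::Result<u32> {
    // Previously: input.read_u32::<LittleEndian>()
    let mut buf = [0u8; 4];
    input.read_exact(&mut buf)?;
    Ok(u32::from_le_bytes(buf))
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    write_u32_le(&mut buf, 0xDEADBEEF)?;
    let mut cursor = Cursor::new(buf);
    assert_eq!(read_u32_le(&mut cursor)?, 0xDEADBEEF);
    Ok(())
}
```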
do not apply DerefMut on union field
This implements the part of [RFC 2514](https://github.com/rust-lang/rfcs/blob/master/text/2514-union-initialization-and-drop.md) about `DerefMut`. Unlike described in the RFC, we only apply this warning specifically when doing `DerefMut` of a `ManuallyDrop` field; that is really the case we are worried about here.
@matthewjasper suggested I patch `convert_place_derefs_to_mutable` and `convert_place_op_to_mutable` for this, but I could not find anything to do in `convert_place_op_to_mutable` and this is sufficient to make the test pass. However, maybe there are some other cases this misses? I have no familiarity with this code.
This is a breaking change *in theory*, if someone used `ManuallyDrop<T>` in a union field and relied on automatic `DerefMut`. But on stable this means `T: Copy`, so the `ManuallyDrop` is rather pointless.
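For illustration, a minimal sketch (not from the test suite) of the kind of code affected; the implicit `u.f.push(4)` spelling is what the new warning targets, and the explicit deref has to be written out:

```rust
use std::mem::ManuallyDrop;

union U {
    // `ManuallyDrop` is how non-`Copy` types are placed in union fields.
    f: ManuallyDrop<Vec<i32>>,
}

fn main() {
    let mut u = U { f: ManuallyDrop::new(vec![1, 2, 3]) };
    unsafe {
        // `u.f.push(4)` would need an implicit `DerefMut` through the union
        // field, which is the case this change warns about; writing the deref
        // explicitly keeps the intent visible:
        (*u.f).push(4);
        assert_eq!((*u.f).len(), 4);
        // The union never drops its field, so drop it manually.
        ManuallyDrop::drop(&mut u.f);
    }
}
```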
Cc https://github.com/rust-lang/rust/issues/55149
Tools, tests, and experimenting with MIR-derived coverage counters
Leverages the new mir_dump output file in HTML+CSS (from #76074) to visualize coverage code regions
and the MIR features that they came from (including overlapping spans).
See example below.
The `run-make-fulldeps/instrument-coverage` test has been refactored to maximize test coverage and reduce code duplication. The new tests support testing with and without `-Clink-dead-code`, so Rust coverage can be tested on MSVC (which, currently, only works with `link-dead-code` _disabled_).
New tests validate coverage region generation and coverage reports with multiple counters per function. Starting with a simple `if-else` branch test, coverage tests for each additional syntax type can be added simply by dropping in a new Rust sample program.
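As a flavor of what such a sample program looks like (a hypothetical one, not one of the files added here):

```rust
// Hypothetical coverage sample: the `if` and `else` arms should each get their
// own counter, so a report can show which branch executed.
fn main() {
    let countdown = if std::env::args().count() > 1 { 10 } else { 0 };
    if countdown > 0 {
        println!("counting down from {}", countdown);
    } else {
        println!("nothing to count");
    }
}
```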
Includes a basic, MIR-block-based implementation of coverage injection,
available via `-Zexperimental-coverage`. This implementation has known
flaws and omissions, but is simple enough to validate the new tools and
tests.
The existing `-Zinstrument-coverage` option currently enables
function-level coverage only, which at least appears to generate
accurate coverage reports at that level.
Experimental coverage is not accurate at this time. When branch coverage
works as intended, the `-Zexperimental-coverage` option should be
removed.
This PR replaces the bulk of PR #75828, with the remaining parts of
that PR distributed among other separate and independent PRs.
This PR depends on three of those other PRs: #76002, #76003, and #76074.
Rust compiler MCP rust-lang/compiler-team#278
Relevant issue: #34701 - Implement support for LLVM's code coverage
instrumentation
![Screen-Recording-2020-08-21-at-2](https://user-images.githubusercontent.com/3827298/90972923-ff417880-e4d1-11ea-92bb-8713c6198f6d.gif)
r? @tmandry
FYI: @wesleywiser
diagnostics: shorten paths of unique symbols
This is a step towards implementing a fix for #50310, and a continuation of the discussion in [Pre-RFC: Nicer Types In Diagnostics - compiler - Rust Internals](https://internals.rust-lang.org/t/pre-rfc-nicer-types-in-diagnostics/11139). The previous discussion in #21934 impressed upon me that an RFC for this is not needed, and that I should just come up with code.
The recent improvements to `use` suggestions that I've contributed have given rise to this implementation. Contrary to previous suggestions, the logic is rather simple, and I believe it only reduces the cognitive load on a developer reading type errors.
-----
If a symbol name can only be imported from one place, and as long as it was not glob-imported anywhere in the current crate, we can trim its printed path to the last component.
This has wide implications for error messages with types, for example shortening `std::vec::Vec` to just `Vec`, as long as there is no other `Vec` importable from anywhere.
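A contrived example of the kind of diagnostic this affects (the error wording described in the comment is approximate):

```rust
// With path trimming, a mismatched-types error for a wrong argument to
// `take_vec` (say, passing `0u32`) can refer to `Vec<u32>` instead of
// `std::vec::Vec<u32>`, since no other `Vec` is importable in this crate.
fn take_vec(v: Vec<u32>) -> usize {
    v.len()
}

fn main() {
    assert_eq!(take_vec(vec![1, 2, 3]), 3);
}
```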
Adds a new mir_dump output file in HTML/CSS to visualize code regions
and the MIR features that they came from (including overlapping spans).
See example below:
Includes a basic, MIR-block-based implementation of coverage injection,
available via `-Zexperimental-coverage`. This implementation has known
flaws and omissions, but is simple enough to validate the new tools and
tests.
The existing `-Zinstrument-coverage` option currently enables
function-level coverage only, which at least appears to generate
accurate coverage reports at that level.
Experimental coverage is not accurate at this time. When branch coverage
works as intended, the `-Zexperimental-coverage` option should be
removed.
This PR replaces the bulk of PR #75828, with the remaining parts of
that PR distributed among other separate and independent PRs.
This PR depends on three of those other PRs: #76000, #76002, and
Rust compiler MCP rust-lang/compiler-team#278
Relevant issue: #34701 - Implement support for LLVM's code coverage
instrumentation
![Screen-Recording-2020-08-21-at-2](https://user-images.githubusercontent.com/3827298/90972923-ff417880-e4d1-11ea-92bb-8713c6198f6d.gif)
`run_compiler` is used by clippy and other tools, which should not have
the trimmed-paths feature enabled by default until we see that it works
well for them.
It would also be nice to rename `TimePassesCallbacks`, but that is a
submodule change.
inliner: Avoid query cycles when optimizing generators
The HIR Id trick is insufficient to prevent query cycles when optimizing
generators, since merely requesting a layout of a generator also
computes its `optimized_mir`.
Make no attempts to inline functions into generators within the same
crate to avoid query cycles.
Fixes #76181.
Avoid rehashing Fingerprint as a map key
This introduces a no-op `Unhasher` for map keys that are already hash-
like, for example `Fingerprint` and its wrapper `DefPathHash`. For these
we can directly produce the `u64` hash for maps. The first use of this
is `def_path_hash_to_def_id: Option<UnhashMap<DefPathHash, DefId>>`.
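A minimal sketch of the idea (names and details differ from the actual rustc implementation), using plain `u64` keys that stand in for `Fingerprint`:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// A Hasher that assumes the key already is a good hash and just passes the
// `u64` through, so the map does no extra hashing work.
#[derive(Default)]
struct Unhasher {
    value: u64,
}

impl Hasher for Unhasher {
    fn finish(&self) -> u64 {
        self.value
    }

    fn write(&mut self, _bytes: &[u8]) {
        unimplemented!("use write_u64 for keys that are already hashes");
    }

    fn write_u64(&mut self, i: u64) {
        self.value = i;
    }
}

type UnhashMap<K, V> = HashMap<K, V, BuildHasherDefault<Unhasher>>;

fn main() {
    // Pretend these u64 keys are already fingerprints/hashes.
    let mut map: UnhashMap<u64, &str> = UnhashMap::default();
    map.insert(0x9e3779b97f4a7c15, "some def path");
    assert_eq!(map.get(&0x9e3779b97f4a7c15), Some(&"some def path"));
}
```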
cc #56308
r? @eddyb
Improve recovery on malformed format call
The token following a format expression should be a comma. However, when it is replaced with a similar token (such as a dot), the corresponding error is emitted, but the token is then treated as a comma and parsing continues.
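A hypothetical illustration (error wording left vague on purpose) of the malformed call this recovery targets:

```rust
fn main() {
    let x = 7;
    // Writing `println!("x = {}". x);` puts a `.` where the `,` belongs. With
    // this change the parser reports the expected-comma error, then treats the
    // `.` as a comma and keeps parsing the rest of the macro arguments.
    println!("x = {}", x); // the corrected form
}
```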
r? @petrochenkov
If a symbol name can only be imported from one place, and as long as it
was not glob-imported anywhere in the current crate, we can trim its
printed path and print only the name.
This has wide implications for error messages with types, for example,
shortening `std::vec::Vec` to just `Vec`, as long as there is no other
`Vec` importable anywhere.
This adds a new `-Z trim-diagnostic-paths=false` option to control this
feature.
On the good path, with no diagnostics printed, we should try to avoid
issuing this query, so we need to prevent the `trimmed_def_paths` query
in several cases.
This change also relies on a previous commit that differentiates
between `Debug` and `Display` on various rustc types, where the latter
is trimmed and presented to the user and the former is not.
The first use case of this is detecting regressions in the trimmed-paths
computation: in the case of rustc, it should be computed only when errors
or warnings are emitted.
Our current user of this method is deeply nested, being a side effect
of `Display` formatting on lots of rustc types. So including only the
caller in the error message is not enough; we should collect a backtrace
instead.
While formatting for user diagnostics used `Display` in almost all cases,
a small number of cases used `Debug` instead. Until now, `Display`
and `Debug` yielded the same output for many types. However, with path
trimming we want to show a shorter path to the user, so these cases need
fixing.
compiler: use `OnceCell` from std
Fixes #76192
The only remaining direct use of `lazy_static` crate is in `src/bootstrap` but I am not sure how I can remove that dependency for now.
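As a rough sketch of the lazy-initialization pattern being standardized on, written against today's stable `std::sync::OnceLock` (the compiler change itself used the std `OnceCell` types available at the time; the names below are made up):

```rust
use std::sync::OnceLock;

// The value is computed once, on first access, replacing a `lazy_static!`.
fn keyword_table() -> &'static Vec<&'static str> {
    static TABLE: OnceLock<Vec<&'static str>> = OnceLock::new();
    TABLE.get_or_init(|| vec!["fn", "let", "match", "impl"])
}

fn main() {
    assert!(keyword_table().contains(&"match"));
    // Subsequent calls reuse the already-initialized value.
    assert_eq!(keyword_table().len(), 4);
}
```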
r? @matklad
lexer: Tiny improvement to shebang detection
The lexer now distinguishes between regular comments and doc comments, so use that.
The change only affects the choice of reported errors.
Give a better error message for duplicate built-in macros
Minor follow-up to https://github.com/rust-lang/rust/pull/75176 giving a better error message for duplicate built-in macros. This would have made it a little easier to debug.
r? @petrochenkov
Add new `-Z dump-mir-spanview` option
Similar to `-Z dump-mir-graphviz`, this adds the option to write
HTML+CSS files that allow users to analyze the spans associated with MIR
elements (by individual statement, just terminator, or overall basic
block).
This PR was split out from PR #76004, and exposes an API for spanview
HTML+CSS files that is also used to analyze code regions chosen for
coverage instrumentation (in a follow-on PR).
Rust compiler MCP rust-lang/compiler-team#278
Relevant issue: #34701 - Implement support for LLVM's code coverage
instrumentation
r? @tmandry
FYI @wesleywiser