Remove special-cased stable hashing for HIR module
All other 'containers' (e.g. `impl` blocks) hashed their contents
in the normal, order-dependent way. `Mod`, however, hashed its
contents in a (sort-of) order-independent way. Since the exact
order of items is exposed to consumers through `Mod.item_ids`
and through query results like `hir_module_items`, stable hashing
needs to take the order of items into account to avoid
fingerprint ICEs.
Unfortunately, I was unable to directly build a reproducer
for the ICE, due to the behavior of `Fingerprint::combine_commutative`.
This operation swaps the upper and lower `u64` when constructing the
result, which makes the function non-associative. Since we start
the hashing of module items by combining `Fingerprint::ZERO` with
the first item, it's difficult to actually build an example where
changing the order of module items leaves the final hash unchanged.
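To make the non-associativity concrete, here is a minimal sketch with a stand-in type (the arithmetic is deliberately simplified and is not the real `Fingerprint` implementation; only the half-swap is the point):
```rust
// Hypothetical stand-in for Fingerprint, illustrating how swapping the two
// halves in the result makes the combine commutative but not associative.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Fp(u64, u64);

impl Fp {
    fn combine_commutative(self, other: Fp) -> Fp {
        // Combine the halves symmetrically, then swap them in the result.
        Fp(self.1.wrapping_add(other.1), self.0.wrapping_add(other.0))
    }
}

fn main() {
    let (x, y, z) = (Fp(1, 2), Fp(3, 4), Fp(5, 7));
    // Commutative: swapping the two operands gives the same result.
    assert_eq!(x.combine_commutative(y), y.combine_commutative(x));
    // Not associative: the grouping of three operands matters.
    assert_ne!(
        x.combine_commutative(y).combine_commutative(z),
        x.combine_commutative(y.combine_commutative(z))
    );
}
```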
However, this appears to have been hit in practice in #92218.
While we're not able to reproduce it, the fact that proc-macros
are involved (which can give an entire module the same span, preventing
any span-related invalidations) makes me confident that the root
cause of that issue is our method of hashing module items.
This PR removes all of the special handling for `Mod`, instead deriving
a `HashStable` implementation. This makes `Mod` consistent with other
'containers' like `Impl`, which hash their contents through the typical
derive of `HashStable`.
Currently the output of a command like `./x.py build --stage 0 library/std` is this:
```
Updating only changed submodules
Submodules updated in 0.02 seconds
extracting [...]
Compiling [...]
Finished dev [unoptimized] target(s) in 17.53s
Building stage0 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Compiling [...]
Finished release [optimized + debuginfo] target(s) in 21.99s
Copying stage0 std from stage0 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu)
Build completed successfully in 0:00:51
```
I find the part before the "Building stage0 std artifacts" line a bit confusing.
After this commit, it looks like this:
```
Updating only changed submodules
Submodules updated in 0.02 seconds
extracting [...]
Building rustbuild
Compiling [...]
Finished dev [unoptimized] target(s) in 17.53s
Building stage0 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Compiling [...]
Finished release [optimized + debuginfo] target(s) in 21.99s
Copying stage0 std from stage0 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu)
Build completed successfully in 0:00:51
```
The "Building rustbuild" label makes it clear what the first cargo build
invocation is for. The indentation of the "Submodules updated" line
indicates it is a sub-step of a parent task.
This PR removes all of the `#[stable_hasher(project(name))]`
attributes used in HIR structs. While these attributes are not known
to be causing any issues in practice, we need to hash the full
fields (including their `Span`s) in order for the incremental
system to work correctly - a query could otherwise be incorrectly
marked green when a change occurs in one of the `Span`s that it uses.
- Do not `#[doc(hidden)]` the `#[derive]` macro attribute
- Add a link to the reference section to `derive`'s inherent docs
- Do the same for `#[test]` and `#[global_allocator]`
- Fix `GlobalAlloc` link (why is it on `core` and not `alloc`?)
- Try `no_inline`-ing the `std` reexports from `core`
- Revert "Try `no_inline`-ing the `std` reexports from `core`"
- Address PR review
- Also document the unstable macros
The documentation on `std::sync::mpsc::Iter` and `std::sync::mpsc::TryIter` provides links to the corresponding `Receiver` methods, but the documentation on `std::sync::mpsc::IntoIter` does not.
This was left out in c59b188aae
Related to #29377
Rollup of 6 pull requests
Successful merges:
- #90102 (Remove `NullOp::Box`)
- #92011 (Use field span in `rustc_macros` when emitting decode call)
- #92402 (Suggest while let x = y when encountering while x = y)
- #92409 (Couple of libtest cleanups)
- #92418 (Fix spacing in pretty printed PatKind::Struct with no fields)
- #92444 (Consolidate Result's and Option's methods into fewer impl blocks)
Failed merges:
- #92483 (Stabilize `result_cloned` and `result_copied`)
r? `@ghost`
`@rustbot` modify labels: rollup
Consolidate Result's and Option's methods into fewer impl blocks
`Result`'s and `Option`'s methods have historically been separated up into `impl` blocks based on their trait bounds, with the bounds specified on type parameters of the impl block. I find this unhelpful because closely related methods, like `unwrap_or` and `unwrap_or_default`, end up disproportionately far apart in source code and rustdocs:
<pre>
impl<T> Option<T> {
    pub fn unwrap_or(self, default: T) -> T {
        ...
    }

<img alt="one eternity later" src="https://user-images.githubusercontent.com/1940490/147780325-ad4e01a4-c971-436e-bdf4-e755f2d35f15.jpg" width="750">

}

impl<T: Default> Option<T> {
    pub fn unwrap_or_default(self) -> T {
        ...
    }
}
</pre>
I'd prefer for methods to be in as few impl blocks as possible, with the most logical grouping within each impl block. Any bounds needed can be written as `where` clauses on the method instead:
```rust
impl<T> Option<T> {
    pub fn unwrap_or(self, default: T) -> T {
        ...
    }

    pub fn unwrap_or_default(self) -> T
    where
        T: Default,
    {
        ...
    }
}
```
*Warning: the end-to-end diff of this PR is computed confusingly by git / rendered confusingly by GitHub; it's practically impossible to review that way. I've broken the PR into commits that move small groups of methods for which git behaves better — these each should be easily individually reviewable.*
Use field span in `rustc_macros` when emitting decode call
This will cause backtraces to point to the location of
the field in the struct/enum, rather than the derive macro.
This makes it clear which field was being decoded when the
backtrace was captured (which is especially useful if
there are multiple fields with the same type).
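As a rough illustration of the technique (a sketch using the `quote` and `syn` crates; the `Decodable` trait and `__decoder` identifier inside the quote are placeholders, not the actual `rustc_macros` code):
```rust
use proc_macro2::TokenStream;
use quote::quote_spanned;
use syn::spanned::Spanned;

// Emit the decode call with the span of the field being decoded, so that a
// backtrace captured while decoding points at the field definition rather
// than at the derive macro invocation.
fn decode_field(field: &syn::Field) -> TokenStream {
    let field_ty = &field.ty;
    let field_span = field.span();
    quote_spanned! {field_span=>
        <#field_ty as Decodable<_>>::decode(__decoder)
    }
}
```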
Remove `NullOp::Box`
Follow up of #89030 and MCP rust-lang/compiler-team#460.
~1 month later nothing seems to be broken, apart from a small regression that #89332 (1aac85bb716c09304b313d69d30d74fe7e8e1a8e) shows could be regained by removing the diverging path, so it shall be safe to continue and remove `NullOp::Box` completely.
r? `@jonas-schievink`
`@rustbot` label T-compiler
Add `#[rustc_clean(loaded_from_disk)]` to assert loading of query result
Currently, you can use `#[rustc_clean]` to assert to that a particular
query (technically, a `DepNode`) is green or red. However, a green
`DepNode` does not mean that the query result was actually deserialized
from disk - we might have never re-run a query that needed the result.
Some incremental tests are written as regression tests for ICEs that
occurred during query result decoding. Using
`#[rustc_clean(loaded_from_disk="typeck")]`, you can now assert
that the result of a particular query (e.g. `typeck`) was actually
loaded from disk, in addition to being green.
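A hedged sketch of how the new assertion might appear in an incremental test (the function name and body are illustrative; the surrounding test setup with revisions and compile flags is omitted):
```rust
// Assert that the `typeck` result for this function was actually decoded
// from the incremental cache, not merely marked green.
#[rustc_clean(loaded_from_disk="typeck")]
pub fn uses_cached_typeck() {
    // body unchanged between compilation sessions
}
```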
No functional changes intended.
The LLVM commit
ec501f15a8
removed the signed version of `createExpression`. This adapts the Rust
LLVM wrappers accordingly.
Move `PatKind::Lit` checking from ast_validation to ast lowering
Fixes #92074
This allows us to insert an `ExprKind::Err` when an invalid expression
is used in a literal pattern, preventing later stages of compilation
from seeing an unexpected literal pattern.
Rustdoc: use ThinVec for GenericArgs bindings
The bindings are almost always empty. This reduces the size of `PathSegment` and `GenericArgs` by about one fourth.
According to MDN
(https://developer.mozilla.org/en-US/docs/Web/CSS/font-size),
> To maximize accessibility, it is generally best to use values that
> are relative to the user's default font size.
> Defining font sizes in px is not accessible, because the user cannot
> change the font size in some browsers.
Note that changing font size (in browser or OS settings) is distinct
from the zoom functionality triggered with Ctrl/Cmd-+. Zoom
functionality increases the size of everything on the page, effectively
applying a multiplier to all pixel sizes. Font size changes apply to
just text.
For relative font sizes, we could use `em`, as we do in several places
already. However that has a problem of "compounding" (see MDN article
for details). The compounding problem is nicely solved by `rem`, which
make font sizes relative to the root element, not the parent element.
Since we were using a hodge-podge of pixel sizes, em, rem, and
percentage sizes before, this change switches everything to rem, while
keeping the same size relative to our old default of 16px (for
example, a 14px size becomes 14 / 16 = 0.875rem).
16px is still the default on most browsers, for users that haven't set a
larger or smaller font size.
Stabilize -Z symbol-mangling-version=v0 as -C symbol-mangling-version=v0
This allows selecting `v0` symbol-mangling without an unstable option. Selecting `legacy` still requires -Z unstable-options.
This does not change the default symbol-mangling-version. See https://github.com/rust-lang/rust/pull/89917 for a pull request changing the default. Rationale, from #89917:
Rust's current mangling scheme depends on compiler internals; loses information about generic parameters (and other things) which makes for a worse experience when using external tools that need to interact with Rust symbol names; is inconsistent; and can contain . characters which aren't universally supported. Therefore, Rust has defined its own symbol mangling scheme which is defined in terms of the Rust language, not the compiler implementation; encodes information about generic parameters in a reversible way; has a consistent definition; and generates symbols that only use the characters A-Z, a-z, 0-9, and _.
Support for the new Rust symbol mangling scheme has been added to upstream tools that will need to interact with Rust symbols (e.g. debuggers).
This pull request allows enabling the new v0 symbol-mangling-version.
See #89917 for references to the implementation of v0, and for references to the tool changes to decode Rust symbols.
Support [x; n] expressions in concat_bytes!
Currently trying to use `concat_bytes!` with a repeating array value like `[42; 5]` results in an error:
```
error: expected a byte literal
 --> src/main.rs:3:27
  |
3 |     let x = concat_bytes!([3; 4]);
  |                           ^^^^^^
  |
  = note: only byte literals (like `b"foo"`, `b's'`, and `[3, 4, 5]`) can be passed to `concat_bytes!()`
```
This makes it so repeating array syntax can be used the same way normal arrays can be. The RFC doesn't explicitly mention repeat expressions, but it seems reasonable to allow them as well, since normal arrays are allowed.
It is possible to make the compiler get stuck compiling forever with `concat_bytes!([3; 999999999])`, but I don't think that's much of an issue since you can do that already with `const X: [u8; 999999999] = [3; 999999999];`.
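For example, with this change something like the following should compile (a hedged sketch; `concat_bytes!` is still unstable, and the usual byte-string result of the macro is assumed here):
```rust
#![feature(concat_bytes)]

fn main() {
    // [42; 4] repeats the byte 42 four times, just like an array repeat
    // expression; 42 is b'*'.
    let bytes = concat_bytes!(b"abc", [42; 4]);
    assert_eq!(bytes, b"abc****");
}
```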
Contributes to #87555.
Track caller of slice split and swap
Improves error location for `slice.split_at*()` and `slice.swap()`.
These are generic inline functions, so the `#[track_caller]` on them is essentially free: it only changes the value of an argument already passed to panicking code.
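A minimal sketch of the pattern on a stand-in function (not the actual `slice` source):
```rust
#[inline]
#[track_caller]
fn split_at_or_panic(s: &[u8], mid: usize) -> (&[u8], &[u8]) {
    // With #[track_caller], the panic below reports the caller's location;
    // for an inline function this only changes the Location value that is
    // already being passed to the panic machinery.
    assert!(mid <= s.len(), "mid out of bounds");
    s.split_at(mid)
}

fn main() {
    let data = [1u8, 2, 3];
    let (left, right) = split_at_or_panic(&data, 2);
    assert_eq!((left, right), (&[1u8, 2][..], &[3u8][..]));
}
```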
Remove effect of `#[no_link]` attribute on name resolution
Previously it hid all non-macro names from other crates.
This has no relation to linking and can change name resolution behavior in some cases (e.g. glob conflicts), in addition to just producing the "unresolved name" errors.
I can kind of understand the possible reasoning behind the current behavior - if you can use names from a `no_link` crates then you can use, for example, functions too, but whether it will actually work or produce link-time errors will depend on random factors like inliner behavior.
(^^^ This is not the actual reason why the current behavior exists; I've looked through git history and it's mostly accidental.)
I think this risk is ok for such an obscure attribute, and we don't need to specifically prevent use of non-macro items from such crates.
(I'm not actually sure why anyone would use `#[no_link]` on a crate, even if it's macro-only; if you are aware of any use cases, please share. IIRC, at some point it was used for crates implementing custom derives - the now-removed legacy ones, not the current proc macros.)
Extracted from https://github.com/rust-lang/rust/pull/91795.
Rollup of 7 pull requests
Successful merges:
- #84083 (Clarify the guarantees that ThreadId does and doesn't make.)
- #91593 (Remove unnecessary bounds for some Hash{Map,Set} methods)
- #92297 (Reduce compile time of rustbuild)
- #92332 (Add test for where clause order)
- #92438 (Enforce formatting for rustc_codegen_cranelift)
- #92463 (Remove pronunciation guide from Vec<T>)
- #92468 (Emit an error for `--cfg=)`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
This allows selecting `v0` symbol-mangling without an unstable option.
Selecting `legacy` still requires -Z unstable-options.
Continue supporting -Z symbol-mangling-version for compatibility for
now, but show a deprecation warning for it.
Emit an error for `--cfg=)`
Fixes #73026
See also: #64467, #89468
The issue stems from a `FatalError` being silently raised in
`panictry_buffer`. Normally this is not a problem, because
`panictry_buffer` emits the causes of the error, but they are not
themselves fatal, so they get filtered out by the silent emitter.
To fix this, we use a parser entrypoint which doesn't use
`panictry_buffer`, and we handle the error ourselves.
Add test for where clause order
I didn't use ``@snapshot`` because of the ` ` characters; it's much simpler doing it through the rustdoc-gui testsuite.
r? `@camelid`
Reduce compile time of rustbuild
Best reviewed commit by commit. The `ignore` crate and its dependencies are probably responsible for the majority of the compile time after this PR.
cc `@jyn514` as you have a couple of open rustbuild PRs.
Remove unnecessary bounds for some Hash{Map,Set} methods
This PR moves `HashMap::{into_keys,into_values,retain}` and `HashSet::retain` from `impl` blocks with `K: Eq + Hash, S: BuildHasher` into the blocks without them. It doesn't seem to me there is any reason these methods need to be bounded by that. This change brings `HashMap::{into_keys,into_values}` on par with `HashMap::{keys,values,values_mut}` which are not bounded either.
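A sketch of the shape of the change on a stand-in map type, since `HashMap` itself can't be re-implemented outside std (names and bodies here are purely illustrative):
```rust
use std::hash::{BuildHasher, Hash};

struct MyMap<K, V, S> {
    entries: Vec<(K, V)>,
    hasher: S,
}

// Methods that actually hash or compare keys keep the bounds.
impl<K: Eq + Hash, V, S: BuildHasher> MyMap<K, V, S> {
    fn get(&self, key: &K) -> Option<&V> {
        self.entries.iter().find(|(k, _)| k == key).map(|(_, v)| v)
    }
}

// Methods like into_keys never hash or compare keys, so they can live in an
// impl block without those bounds.
impl<K, V, S> MyMap<K, V, S> {
    fn into_keys(self) -> impl Iterator<Item = K> {
        self.entries.into_iter().map(|(k, _)| k)
    }
}
```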
Clarify the guarantees that ThreadId does and doesn't make.
The existing documentation does not spell out whether `ThreadId`s are unique during the lifetime of a thread or of a process. I had to examine the source code to realise (pleasingly!) that they're unique for the lifetime of a process. That seems worth documenting clearly, as it's a strong guarantee.
Examining the way `ThreadId`s are created also made me realise that the `as_u64` method on `ThreadId` could be a trap for the unwary on those platforms where the platform's notion of a thread identifier is also a 64 bit integer (particularly if they happen to use a similar identifier scheme to `ThreadId`). I therefore think it's worth being even clearer that there's no relationship between the two.
Remove CommandEnv::apply
It's not being used and uses the unsound `set_var` and `remove_var` functions. This is an internal function that isn't exported (even with the `process_internals` feature), so this shouldn't break anything.
Also see #92365. Note that this isn't the only use of those methods in standard library, so that particular pull request will need more changes than just this to work (in particular, `test_capture_env_at_spawn` is using `set_var` and `remove_var`).
Fixes#92074
This allows us to insert an `ExprKind::Err` when an invalid expression
is used in a literal pattern, preventing later stages of compilation
from seeing an unexpected literal pattern.
This slightly improves compilation time by reducing linking time
(saving about 1/10 of the total compilation time after
changing rustbuild) and slightly reduces disk usage (from 16MB for
the rustc wrapper to 4MB).
The task of the macro is simple enough that a decl macro is almost ten
times shorter than the original proc macro. The proc macro is 159 lines
while the decl macro is just 18 lines.
This reduces the number of dependencies of rustbuild from 45 to 37. It
also slightly reduces compilation time from 47s to 44s for debug builds.