Fix overlap detection of `usize`/`isize` range patterns
`usize` and `isize` are a bit of a special case in the match usefulness algorithm, because the range of values they contain depends on the platform. Specifically, we don't want `0..usize::MAX` to count as an exhaustive match (see also [`precise_pointer_size_matching`](https://github.com/rust-lang/rust/issues/56354)). The way this was initially implemented is by treating those ranges like float ranges, i.e. with limited cleverness. This means we didn't catch the following as unreachable:
```rust
match 0usize {
    0..10 => {},
    10..20 => {},
    5..15 => {}, // oops, should be detected as unreachable
    _ => {},
}
```
This PR fixes that oversight. Now the only difference between `usize` and `u64` range patterns is in which ranges count as exhaustive.
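For illustration, here is a hedged sketch of that remaining difference (the function name and arms are made up): without the unstable `precise_pointer_size_matching` feature, the full `usize` range deliberately does not count as exhaustive, so a wildcard arm is still required, whereas the same arms on a `u64` would make the wildcard unreachable.
```rust
fn classify(n: usize) -> &'static str {
    match n {
        0 => "zero",
        1..=usize::MAX => "positive",
        // Still required for `usize`: the two arms above do not count as
        // exhaustive. On a `u64`, this wildcard would be flagged unreachable.
        _ => unreachable!(),
    }
}
```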
r? `@varkor`
`@rustbot` label +A-exhaustiveness-checking
Don't run `resolve_vars_if_possible` in `normalize_erasing_regions`
Neither `@eddyb` nor I could figure out what this was for. I changed it to `assert_eq!(normalized_value, infcx.resolve_vars_if_possible(&normalized_value));` and it passed the UI test suite.
<details><summary>
Outdated, I figured out the issue - `needs_infer()` needs to come _after_ erasing the lifetimes
</summary>
Strangely, if I change it to `assert!(!normalized_value.needs_infer())` it panics almost immediately:
```
query stack during panic:
#0 [normalize_generic_arg_after_erasing_regions] normalizing `<str::IsWhitespace as str::pattern::Pattern>::Searcher`
#1 [needs_drop_raw] computing whether `str::iter::Split<str::IsWhitespace>` needs drop
#2 [mir_built] building MIR for `str::<impl str>::split_whitespace`
#3 [unsafety_check_result] unsafety-checking `str::<impl str>::split_whitespace`
#4 [mir_const] processing MIR for `str::<impl str>::split_whitespace`
#5 [mir_promoted] processing `str::<impl str>::split_whitespace`
#6 [mir_borrowck] borrow-checking `str::<impl str>::split_whitespace`
#7 [analysis] running analysis passes on this crate
end of query stack
```
I'm not entirely sure what's going on - maybe the two disagree?
</details>
For context, this came up while reviewing https://github.com/rust-lang/rust/pull/77467/ (cc `@lcnr`).
Possibly this needs a crater run?
r? `@nikomatsakis`
cc `@matthewjasper`
Support repr(simd) on ADTs containing a single array field
This is a squash and rebase of `@gnzlbg`'s #63531.
I've never actually written code in the compiler before, so I just fumbled my way around until it would build 😅
I imagine there'll be some work we need to do in `rustc_codegen_cranelift` too for this now, but we might need some input from `@bjorn3` to know what that is.
cc `@rust-lang/project-portable-simd`
-----
This PR allows using `#[repr(simd)]` on ADTs containing a single array field:
```rust
#[repr(simd)] struct S0([f32; 4]);
#[repr(simd)] struct S1<const N: usize>([f32; N]);
#[repr(simd)] struct S2<T, const N: usize>([T; N]);
```
This should allow experimenting with portable packed SIMD abstractions on nightly that make use of const generics.
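As a rough sketch of what that experimentation might look like on nightly (the type name is illustrative, and `repr_simd` is the assumed feature gate):
```rust
#![feature(repr_simd)]

// Illustrative only: a fixed-size vector in the newly supported
// single-array-field form.
#[repr(simd)]
#[derive(Copy, Clone)]
struct F32x4([f32; 4]);

fn main() {
    let v = F32x4([1.0, 2.0, 3.0, 4.0]);
    // Round-trip through the array layout; a portable-SIMD abstraction would
    // use intrinsics or platform ops instead of a plain transmute.
    let lanes: [f32; 4] = unsafe { std::mem::transmute(v) };
    assert_eq!(lanes, [1.0, 2.0, 3.0, 4.0]);
}
```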
Extend doc keyword feature by allowing any ident
Part of #51315.
As suggested by ``@danielhenrymantilla`` in [this comment](https://github.com/rust-lang/rust/issues/51315#issuecomment-733879934), this PR extends `#[doc(keyword = "...")]` to allow any ident to be used as keyword. The final goal is to allow (proc-)macro crates' owners to write documentation of the keywords they might introduce.
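For illustration, a hedged sketch of the extended form on nightly (the keyword ident and module name are invented for the example; `doc_keyword` is the assumed feature gate):
```rust
#![feature(doc_keyword)]

/// Documentation for a macro-introduced keyword; after this change, any
/// ident is accepted here, not just Rust's built-in keywords.
#[doc(keyword = "tadam")]
mod tadam_keyword {}
```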
r? ``@jyn514``
Sync rustc_codegen_cranelift
This implements a few extra SIMD intrinsics, fixes yet another 128-bit bug, and updates a few dependencies. It also fixes a cg_clif subtree update that compiled, but caused a panic when compiling libcore. Other than that, this is mostly cleanups.
`@rustbot` modify labels: +A-codegen +A-cranelift +T-compiler
rustc_parse: fix ConstBlock expr span
The span for a `ConstBlock` expression should presumably run through the end of the block it contains rather than stopping at the keyword, just as is done for similar block-containing expression kinds, such as a `TryBlock`.
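For reference, a small sketch of the expression in question (assuming the nightly `inline_const` feature that gates const blocks):
```rust
#![feature(inline_const)]

fn main() {
    // With this fix, the expression's span covers `const { 40 + 2 }`
    // rather than stopping after the `const` keyword.
    let x = const { 40 + 2 };
    assert_eq!(x, 42);
}
```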
Properly handle attributes on statements
We now collect tokens for the underlying node wrapped by `StmtKind`
instead of storing tokens directly in `Stmt`.
`LazyTokenStream` now supports capturing a trailing semicolon after it
is initially constructed. This allows us to avoid refactoring statement
parsing to wrap the parsing of the semicolon in `parse_tokens`.
Attributes on item statements
(e.g. `fn foo() { #[bar] struct MyStruct; }`) are now treated as
item attributes, not statement attributes, which is consistent with how
we handle attributes on other kinds of statements. The feature-gating
code is adjusted so that proc-macro attributes are still allowed on item
statements on stable.
Two built-in macros (`#[global_allocator]` and `#[test]`) needed to be
adjusted to support being passed `Annotatable::Stmt`.
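As a hedged sketch of the statement position this refers to (whether one would actually declare an allocator inside a function is beside the point; the expansion mechanics are what changed):
```rust
use std::alloc::System;

fn main() {
    // A built-in attribute macro on an item statement; it is now handed to
    // the macro wrapped as `Annotatable::Stmt`.
    #[global_allocator]
    static GLOBAL: System = System;
}
```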
Split match exhaustiveness into two files
I feel the constructor-related things in the `_match` module make enough sense on their own so I split them off. It makes `_match` feel less like a complicated mess. I'm not aware of PRs in progress against this module apart from my own so hopefully I'm not annoying too many people.
I have a lot of questions about the conventions in naming and modules around the compiler. Like, why is the module named `_match`? Could I rename it to `usefulness` maybe? Should `deconstruct_pat` be a submodule of `_match` since only `_match` uses it? Is it ok to move big piles of code around even if it makes git blame more difficult?
r? `@varkor`
`@rustbot` modify labels: +A-exhaustiveness-checking
Add support for Arm64 Catalyst on ARM Macs
This is an iteration on https://github.com/rust-lang/rust/pull/63467, which was merged a while ago. In the aforementioned PR, I added support for the `x86_64-apple-ios-macabi` target triple, which is Catalyst: iOS apps running on macOS.
Very soon, Apple will launch ARM64-based Macs, which will introduce `aarch64_apple_darwin.rs`: macOS apps using the Darwin ABI running on ARM. This PR adds support for Catalyst apps on ARM Macs: iOS apps compiled for the Darwin ABI.
I don't have access to an Apple Developer Transition Kit (DTK), so I can't really test whether the generated binaries work correctly. I'm vaguely hopeful that somebody with access to a DTK could give this a spin.
This preserves the current lint behavior for now.
Linting after item statements currently prevents the compiler from bootstrapping; fixing that is blocked on an upstream fix in Cargo and a bump of the Cargo submodule.
When parsing a statement (e.g. inside a function body),
we now consider `struct Foo {};` and `$stmt;` to each consist
of two statements: `struct Foo {}` and `;`, and `$stmt` and `;`.
As a result, an attribute macro invoked as
`fn foo() { #[attr] struct Bar {}; }` will see `struct Bar {}` as its
input. Additionally, the 'unused semicolon' lint now fires in more
places, as sketched below.
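A small sketch of that lint consequence (assuming the lint in question is `redundant_semicolons`):
```rust
fn main() {
    // The `;` after the item is now its own empty statement, so the
    // redundant-semicolon lint can fire on it.
    struct Bar {};
}
```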
Cache pretty-print/retokenize result to avoid compile time blowup
Fixes #79242
If a `macro_rules!` recursively builds up a nested nonterminal
(passing it to a proc-macro at each step), we will end up repeatedly
pretty-printing/retokenizing the same nonterminals. Unfortunately, the
'probable equality' check we do has a non-trivial cost, which leads to a
blowup in compilation time.
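For a rough idea of the shape that triggers this, here is a made-up macro; the real reproduction hands the captured nonterminal to a proc macro at every step, which is omitted here to keep the sketch self-contained:
```rust
macro_rules! nest {
    // Base case: no counter tokens left; emit the accumulated expression.
    (() $e:expr) => { $e };
    // Recursive case: consume one counter token and re-capture the already
    // parsed expression as a fresh, one-level-deeper `expr` nonterminal.
    ((x $($n:tt)*) $e:expr) => { nest!(($($n)*) ($e)) };
}

fn main() {
    // Each `x` adds one layer of nonterminal nesting around `1 + 2`.
    let v = nest!((x x x x) 1 + 2);
    assert_eq!(v, 3);
}
```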
As a workaround, we cache the result of the 'probable equality' check,
which eliminates the compilation time blowup for the linked issue. This
commit only touches a single file (other than adding tests), so it
should be easy to backport.
The proper solution is to remove the pretty-print/retokenize hack
entirely. However, this will almost certainly break a large number of
crates that were relying on hygiene bugs created by using the reparsed
`TokenStream`. As a result, we will definitely not want to backport
such a change.