rustc_parse: Remove optimization for 0-length streams in `collect_tokens`
The optimization conflates empty token streams with unknown token streams, which is at least suspicious, and it doesn't affect performance because 0-length token streams are very rare.
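A generic sketch (mine, not the actual `collect_tokens` code) of why that conflation is suspicious: returning "nothing" for an empty capture erases a distinction that callers may rely on.

```rust
// `None` is meant to say "no tokens were captured at all", while `Some` of an
// empty collection says "tokens were captured and there happened to be zero".
// Collapsing the empty case into `None`, as the removed optimization did in
// spirit, makes the two cases indistinguishable for downstream consumers.
fn finish_collecting(captured: Vec<String>) -> Option<Vec<String>> {
    if captured.is_empty() {
        None // suspicious: an empty capture is not the same as no capture
    } else {
        Some(captured)
    }
}
```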
r? `@Aaron1011`
This allows constant propagation to evaluate calls to `size_of` and the
`wrapping_*` intrinsics, and unreachable propagation to propagate the
unreachability of a call to `unreachable`.
The lowering is performed as a MIR optimization rather than during MIR
building, in order to preserve the special status of intrinsics with
respect to unsafety checks and promotion.
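For illustration (my own snippet, not taken from the change), the stable wrappers for these intrinsics show the kind of code the MIR passes can now simplify:

```rust
// `size_of::<u32>()` lowers to a plain constant and `wrapping_add` to an
// ordinary wrapping `Add` in MIR, so constant propagation can fold this
// whole body down to `8`.
pub fn folded() -> u32 {
    let n = std::mem::size_of::<u32>() as u32; // 4
    n.wrapping_add(4)
}

// `std::hint::unreachable_unchecked` forwards to the `unreachable` intrinsic;
// lowering it to the MIR `Unreachable` terminator lets unreachable
// propagation eliminate the branch that can only reach it.
pub unsafe fn only_true(flag: bool) -> u32 {
    if flag { 1 } else { std::hint::unreachable_unchecked() }
}
```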
Add type to `ConstKind::Placeholder`
I simply threaded `<'tcx>` through everything that required it. I'm not sure whether this is the correct thing to do, but it seems to work.
r? `@nikomatsakis`
While reading some parts of the pretty printer code, I noticed this old comment
which seemed out of place. The `use_once_payload` that this outdated comment
mentions was removed in 2017 in 40f03a1e0d6702add1922f82d716d5b2c23a59f0, so this
change completes the work by removing the comment.
During inlining, the callee body is normalized and has its types revealed,
but some of the locals corresponding to the arguments might come from the
caller body, which is not normalized. As a result, the caller body does not
pass validation without additional normalization.
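A hedged sketch (my own, not a test from the change) of the kind of code where this matters: the callee's parameter type is an associated-type projection that the normalized, inlined body sees as `u32`, while the caller-side argument locals may still mention the unnormalized projection.

```rust
trait Project {
    type Out;
}

struct S;

impl Project for S {
    type Out = u32;
}

// In the inlined copy of this body, `<S as Project>::Out` has been
// normalized away to `u32`.
#[inline]
fn callee(x: <S as Project>::Out) -> u32 {
    x + 1
}

// The locals reused for the argument come from this body, which is not
// normalized in the same way, so the combined MIR needs the extra
// normalization in order to pass validation.
fn caller(x: <S as Project>::Out) -> u32 {
    callee(x)
}
```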
The inliner checks whether a sanitizer is enabled before considering the
`no_sanitize` attribute as a possible source of incompatibility.
However, MIR inlining could happen in a crate with sanitizers disabled, while
code generation happens in a crate with a sanitizer enabled, in which case
the attribute would be incorrectly ignored.
To avoid the issue, never inline functions with different `no_sanitize`
attributes.
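A hedged sketch (function names are mine) of the situation being guarded against:

```rust
#![feature(no_sanitize)]

// Must stay uninstrumented, e.g. because it runs before the sanitizer
// runtime is ready.
#[no_sanitize(address)]
fn unsanitized() -> u32 {
    42
}

// Suppose `unsanitized` is MIR-inlined into this generic function while its
// defining crate is built without a sanitizer. If a downstream crate then
// monomorphizes and codegens it with `-Zsanitizer=address`, the inlined copy
// gets instrumented and the attribute is silently lost. Hence the inliner now
// refuses to inline across differing `no_sanitize` attributes, regardless of
// whether a sanitizer is enabled in the current crate.
#[inline]
pub fn caller<T>(_witness: T) -> u32 {
    unsanitized()
}
```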
Add asm register information for SPIR-V
As discussed in [zulip](https://rust-lang.zulipchat.com/#narrow/stream/182449-t-compiler.2Fhelp/topic/Defining.20asm!.20for.20new.20architecture), we at [rust-gpu](https://github.com/EmbarkStudios/rust-gpu) would like to support `asm!` for our SPIR-V backend. However, we cannot do so purely without frontend support: [this match](d4ea0b3e46/compiler/rustc_target/src/asm/mod.rs (L185)) fails and so `asm!` is not supported ([error reported here](d4ea0b3e46/compiler/rustc_ast_lowering/src/expr.rs (L1095))). To resolve this, we need to stub out register information for SPIR-V to support getting the `asm!` content all the way to [`AsmBuilderMethods::codegen_inline_asm`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/trait.AsmBuilderMethods.html#tymethod.codegen_inline_asm), at which point the rust-gpu backend can do all the parsing and codegen that is needed.
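Purely as a hypothetical illustration (mine, not from this PR): once register information exists, user code could look roughly like the sketch below. The `reg` class name is one of the open questions further down, and the exact template and operand syntax are assumptions; the point is only that rustc forwards the template to the backend, which does the SPIR-V parsing.

```rust
// Hypothetical: assumes a `reg` operand class for SPIR-V ids. `OpStore`
// writes `value` through the `target` pointer and produces no result id,
// so the template needs no type declarations.
unsafe fn store(target: *mut u32, value: u32) {
    core::arch::asm!(
        "OpStore {target} {value}",
        target = in(reg) target,
        value = in(reg) value,
    );
}
```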
This is a pretty weird PR - adding support for a backend that isn't in-tree feels pretty gross to me, but I don't see an easy way around this. ``@Amanieu`` said I should submit it anyway, so, here we are! Let me know if this needs to go through a more formal process (MCP?) and what I should do to help this along.
I based this off the [wasm asm PR](https://github.com/rust-lang/rust/pull/78684); unfortunately this PR conflicts with that one quite a bit, sorry for any merge conflict pain :(
---
Some open questions:
- What do we call the register class? Some context: SPIR-V is an SSA-based IR; there are "instructions" that create IDs (referred to as `<id>` in the spec), which can be referenced by other instructions. So `reg` isn't exactly accurate: they're SSA IDs, not re-assignable registers.
- What happens when a SPIR-V register gets to the LLVM backend? Right now it's a `bug!`, but should that be a `sess.fatal()`? I'm not sure it's even possible to reach that point; maybe there's a check that prevents the `spirv` target from ever reaching that codepath.
Implement destructuring assignment for structs and slices
This is the second step towards implementing destructuring assignment (RFC: rust-lang/rfcs#2909, tracking issue: #71126). This PR is the second part of #71156, which was split up to allow for easier review.
Note that the first PR (#78748) is not merged yet, so it is included as the first commit in this one. I thought this would allow the review to start earlier because I have some time this weekend to respond to reviews. If ``@petrochenkov`` prefers to wait until the first PR is merged, I totally understand, of course.
This PR implements destructuring assignment for (tuple) structs and slices. In order to do this, the following *parser change* was necessary: struct expressions are no longer required to have a base expression after `..`, i.e. `Struct { a: 1, .. }` becomes legal (in order to act like a struct pattern).
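For illustration (my own snippet, not one of the PR's tests), the forms this makes legal; at the time they were gated behind `#![feature(destructuring_assignment)]`:

```rust
struct Point { x: i32, y: i32 }
struct Pair(i32, i32);

fn demo() {
    let x;
    let y;
    let a;
    let b;
    let first;
    let second;

    // Struct destructuring assignment: a struct "expression" without a base
    // expression on the left-hand side, acting like a struct pattern.
    Point { x, y } = Point { x: 1, y: 2 };

    // Tuple struct destructuring assignment.
    Pair(a, b) = Pair(3, 4);

    // Slice (array) destructuring assignment.
    [first, second] = [5, 6];

    assert_eq!((x, y, a, b, first, second), (1, 2, 3, 4, 5, 6));
}
```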
Unfortunately, this PR slightly regresses the diagnostics implemented in #77283. However, it is only a missing help message in `src/test/ui/issues/issue-77218.rs`. Other instances of this diagnostic are not affected. Since I don't exactly understand how this help message works and how to fix it yet, I was hoping it's OK to regress this temporarily and fix it in a follow-up PR.
Thanks to ``@varkor`` who helped with the implementation, particularly around the struct rest changes.
r? ``@petrochenkov``
Reusing bindings causes errors later in lowering:
```
error[E0596]: cannot borrow `vec` as mutable, as it is not declared as mutable
--> /checkout/src/test/ui/async-await/argument-patterns.rs:12:20
|
LL | async fn b(n: u32, ref mut vec: A) {
| ^^^^^^^^^^^
| |
| cannot borrow as mutable
| help: consider changing this to be mutable: `mut vec`
```
incr-comp: hash and serialize span end line/column
Hash both the length and the end location (line/column) of a span. If we
hash only the length, for example, then two otherwise equal spans with
different end locations will have the same hash. This can cause a
problem during incremental compilation wherein a previous result for a
query that depends on the end location of a span will be incorrectly
reused when the end location of the span it depends on has changed. A
similar analysis applies if some query depends specifically on the
length of the span, but we only hash the end location. So hash both.
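As a worked example (my own, not from the linked issues): an edit can leave a span's start position and byte length unchanged while moving its end to a different line and column, which is exactly the case the old hash failed to distinguish.

```rust
// Revision 1: the span of `(1, 2)` is 6 bytes long and ends on the same
// line it starts on.
fn before() -> (i32, i32) {
    (1, 2)
}

// Revision 2: the tuple expression is still 6 bytes long and still starts at
// the same column, but its span now ends one line further down at a different
// column. A hash covering only the start and length is identical for both
// revisions, so a cached query result that depends on the span's end location
// could be reused incorrectly.
fn after() -> (i32, i32) {
    (1,
2)
}
```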
Fix #46744, fix #59954, fix #63161, fix #73640, fix #73967, fix #74890, fix #75900
---
See #74890 for a more in-depth analysis.
I haven't thought about what other problems this root cause could be responsible for. Please let me know if anything springs to mind. I believe the issue has existed since the inception of incremental compilation.