Commit Graph

911 Commits

Author SHA1 Message Date
Camille GILLOT
7f26191aed Make drop_flags an IndexVec. 2023-04-28 20:12:45 +00:00
bors
6ce22733b9 Auto merge of #110882 - BoxyUwU:rename-some-ty-flags, r=compiler-errors
rename `NEEDS_SUBST` and `NEEDS_INFER`

implements rust-lang/compiler-team#617
2023-04-27 09:55:37 +00:00
Boxy
842419712a rename needs_subst to has_param 2023-04-27 08:35:19 +01:00
bors
8b8110e146 Auto merge of #110728 - cjgillot:no-false-optes, r=oli-obk
Do not bother optimizing impossible functions.

This is currently checked by `ConstProp`, but I see no reason to restrict it to ConstProp only.
2023-04-27 04:29:49 +00:00
bors
9c044d77a3 Auto merge of #110822 - scottmcm:lower-offset-to-mir, r=compiler-errors
Lower `intrinsics::offset` to `mir::BinOp::Offset`

They're [semantically the same](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/mir/enum.Rvalue.html#variant.BinaryOp), so this means the backends don't need to handle the intrinsic and means fewer MIR basic blocks in pointer arithmetic code.
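
A hedged illustration (not code from the PR) of the kind of source this affects: ordinary raw-pointer arithmetic bottoms out in `intrinsics::offset`.

```rust
// Hedged illustration, not from the PR: `<*const T>::add`/`offset` bottom out
// in `intrinsics::offset`, so after this change the arithmetic below is
// represented directly as `BinOp::Offset` in MIR instead of as an intrinsic
// call that every backend has to special-case.
pub unsafe fn third_element(p: *const u32) -> u32 {
    *p.add(2)
}

fn main() {
    let xs = [10u32, 20, 30, 40];
    assert_eq!(unsafe { third_element(xs.as_ptr()) }, 30);
}
```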
2023-04-26 15:52:33 +00:00
bors
8763965a2c Auto merge of #97368 - tmandry:coverage-underflow, r=jyn514
coverage: Don't underflow column number

I noticed this when running coverage on a debug build of rustc. There
may be other places that do this but I'm just fixing the one I hit.
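
A minimal sketch of the general pattern, assuming the bug is a plain `- 1` on a column that can already be 0; this is not the actual rustc change.

```rust
// Hedged sketch, not the actual rustc code: convert a 1-based column to
// 0-based without wrapping to u32::MAX (or panicking in debug builds) when
// the input is already 0.
fn to_zero_based_column(col: u32) -> u32 {
    col.saturating_sub(1)
}

fn main() {
    assert_eq!(to_zero_based_column(1), 0);
    assert_eq!(to_zero_based_column(0), 0); // plain `col - 1` would underflow here
}
```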

r? `@wesleywiser` `@richkadel`
2023-04-26 12:03:13 +00:00
Scott McMurray
05a665f21a Lower intrinsics::offset to mir::BinOp::Offset
They're semantically the same, so this means the backends don't need to handle the intrinsic and means fewer MIR basic blocks in pointer arithmetic code.
2023-04-25 19:23:45 -07:00
Matthias Krüger
297b222066
Rollup merge of #110556 - kylematsuda:earlybinder-explicit-item-bounds, r=compiler-errors
Switch to `EarlyBinder` for `explicit_item_bounds`

Part of the work to finish https://github.com/rust-lang/rust/issues/105779.

This PR adds `EarlyBinder` to the return type of the `explicit_item_bounds` query and removes `bound_explicit_item_bounds`.
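
A toy sketch of the new calling pattern, using stand-in types rather than rustc's real `ty::EarlyBinder` and `(Predicate, Span)` pairs:

```rust
// Toy sketch with stand-in types: wrapping the query result in a binder forces
// callers to instantiate it explicitly instead of reaching for a separate
// `bound_*` helper.
struct EarlyBinder<T>(T);

impl<T> EarlyBinder<T> {
    // Instantiate with the item's own generic parameters; rustc's real type
    // also offers `subst(tcx, substs)` for concrete generic arguments.
    fn subst_identity(self) -> T {
        self.0
    }
}

// Stand-in for the `explicit_item_bounds` query after this PR.
fn explicit_item_bounds() -> EarlyBinder<Vec<&'static str>> {
    EarlyBinder(vec!["Iterator", "Send"])
}

fn main() {
    let bounds = explicit_item_bounds().subst_identity();
    assert_eq!(bounds.len(), 2);
}
```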

r? `@compiler-errors` (hope it's okay to request you, since you reviewed #110299 and #110498 😃)
2023-04-25 21:06:32 +02:00
Camille GILLOT
0ee32fb3c7 Move unsatisfiability check earlier. 2023-04-25 17:11:46 +00:00
Camille GILLOT
0f857791ad Fully clear the body. 2023-04-24 18:53:47 +00:00
Yuki Okushi
4d3ab3da4e
Rollup merge of #110685 - cjgillot:clean-dcp, r=oli-obk
Some cleanups to DataflowConstProp

Mostly moving code around and short-circuiting useless cases.
2023-04-25 02:33:30 +09:00
Maybe Waffle
e496fbec92 Split {Idx, IndexVec, IndexSlice} into their own modules 2023-04-24 13:53:35 +00:00
Matthias Krüger
2ce9b574a4
Rollup merge of #110714 - cjgillot:reveal-consts, r=oli-obk
Normalize types and consts in MIR opts.

Some passes were using a non-RevealAll param_env, which is needlessly restrictive in mir-opts.

As a drive-by, we normalize all constants, since just normalizing their types is not enough.
2023-04-24 07:53:25 +02:00
Matthias Krüger
3ecae2932c
Rollup merge of #110706 - scottmcm:transmute_unchecked, r=oli-obk
Add `intrinsics::transmute_unchecked`

This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
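
A hedged, nightly-only illustration (not taken from the PR) of what the intrinsic does; `feature(core_intrinsics)` is required to call it directly.

```rust
#![feature(core_intrinsics)] // nightly-only; direct intrinsic use shown for illustration

use std::intrinsics::transmute_unchecked;

fn main() {
    let x: u32 = 5;
    // Same-size case, identical to `transmute`; the difference only shows up
    // in generic code where `hir_typeck` cannot prove the sizes equal.
    let y: i32 = unsafe { transmute_unchecked(x) };
    assert_eq!(y, 5);
}
```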

Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.

It also simplifies a couple places in `core`.

See also https://github.com/rust-lang/rust/pull/108442#issuecomment-1474777273, where `CastKind::Transmute` was added with exactly these semantics, before the lang meeting (which I wasn't in) independently expressed interest.
2023-04-24 07:53:25 +02:00
Camille GILLOT
15e5072147 Do not bother optimizing impossible functions. 2023-04-23 16:35:49 +00:00
bors
915aa06700 Auto merge of #110705 - saethlin:ignore-locals-cost, r=cjgillot
Remove the size of locals heuristic in MIR inlining

This heuristic doesn't necessarily correlate to complexity of the MIR Body. In particular, a lot of straight-line code in MIR tends to never reuse a local, even though any optimizer would effectively reuse the storage or just put everything in registers. So it doesn't even necessarily make sense that this would be a stack size heuristic.

So... what happens if we just delete the heuristic? The benchmark suite improves significantly. Are fewer heuristics better?

r? `@cjgillot`
2023-04-23 15:41:45 +00:00
bors
3462f79e94 Auto merge of #108118 - oli-obk:lazy_typeck, r=cjgillot
Run various queries from other queries instead of explicitly in phases

These are just legacy leftovers from when rustc didn't have a query system. While there are more cleanups of this sort that can be done here, I want to land them in smaller steps.

This phased order of query invocations was already a lie, as any query that looks at types (e.g. the WF checks that run before) can invoke const eval, which invokes borrowck, which invokes typeck, and so on.
2023-04-23 13:34:31 +00:00
Camille GILLOT
4fe51365d7 Use param_env_reveal_all_normalized in MIR opts. 2023-04-23 10:04:41 +00:00
Camille GILLOT
7efcf67a3b Also reveal constants before MIR opts. 2023-04-23 10:04:41 +00:00
Scott McMurray
1de2257c3f Add intrinsics::transmute_unchecked
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.

Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.

It also simplifies a couple places in `core`.
2023-04-22 17:22:03 -07:00
Ben Kimock
173845ce0e Remove the size of locals heuristic in MIR inlining 2023-04-22 19:17:11 -04:00
Wesley Wiser
4e8b642646 Turn on ConstDebugInfo pass. 2023-04-22 23:41:48 +02:00
Camille GILLOT
dd78b997b5 Reduce rightward drift. 2023-04-22 12:30:37 +00:00
Camille GILLOT
dd452ae70e Simplify logic. 2023-04-22 12:30:36 +00:00
Camille GILLOT
629cdb42d3 Move eval_discriminant. 2023-04-22 12:30:36 +00:00
Camille GILLOT
0e68cbd45d Remove useless special case. 2023-04-22 12:30:36 +00:00
bors
21fab435da Auto merge of #104844 - cjgillot:mention-eval-place, r=jackh726,RalfJung
Evaluate place expression in `PlaceMention`

https://github.com/rust-lang/rust/pull/102256 introduces a `PlaceMention(place)` MIR statement which keeps track of `let _ = place` statements from surface Rust, but without semantics.

This PR proposes to change the behaviour of `let _ =` patterns with respect to the borrow-checker to verify that the bound place is live.

Specifically, consider this code:
```rust
let _ = {
    let a = 5;
    &a
};
```

This passes borrowck without error on stable. Meanwhile, replacing `_` by `_: _` or `_p` errors with "error[E0597]: `a` does not live long enough", [see playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=c448d25a7c205dc95a0967fe96bccce8).
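
A minimal sketch of the named-binding variant described above, which borrowck already rejects on stable:

```rust
// Expected to be rejected: `a` is dropped at the end of the inner block while
// `_p` still borrows it, so borrowck reports E0597.
fn main() {
    let _p = {
        let a = 5;
        &a
        //~^ ERROR `a` does not live long enough
    };
}
```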

This PR *does not* change how `_` patterns behave with respect to initializedness: it remains ok to bind a moved-from place to `_`.

The relevant test is `tests/ui/borrowck/let_underscore_temporary.rs`. Crater check found no regression.

For consistency, this PR changes miri to evaluate the place found in `PlaceMention`, and to report any dangling pointers found within it.

r? `@RalfJung`
2023-04-22 09:54:21 +00:00
bors
3128fd8ddf Auto merge of #110666 - JohnTitor:rollup-3pwilte, r=JohnTitor
Rollup of 7 pull requests

Successful merges:

 - #109949 (rustdoc: migrate `document_type_layout` to askama)
 - #110622 (Stable hash tag (discriminant) of `GenericArg`)
 - #110635 (More `IS_ZST` in `library`)
 - #110640 (compiler/rustc_target: Raise m68k-linux-gnu baseline to 68020)
 - #110657 (nit: consistent naming for SimplifyConstCondition)
 - #110659 (rustdoc: clean up JS)
 - #110660 (Print ty placeholders pretty)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2023-04-22 05:35:08 +00:00
bors
80a2ec49a4 Auto merge of #106934 - DrMeepster:offset_of, r=WaffleLapkin
Add offset_of! macro (RFC 3308)

Implements https://github.com/rust-lang/rfcs/pull/3308 (tracking issue #106655) by adding the built in macro `core::mem::offset_of`. Two of the future possibilities are also implemented:

* Nested field accesses (without array indexing)
* DST support (for `Sized` fields)
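
A small usage sketch of the macro as described above; the exact offsets assume `repr(C)` layout, and the feature-gate name reflects the tracking issue at the time.

```rust
#![feature(offset_of)] // nightly feature gate at the time of this PR

use core::mem::offset_of;

#[repr(C)]
struct Inner {
    a: u8,
    b: u32,
}

#[repr(C)]
struct Outer {
    x: u16,
    inner: Inner,
}

fn main() {
    // Byte offset of a field within its type; `repr(C)` makes the numbers predictable.
    assert_eq!(offset_of!(Inner, b), 4);
    // Nested field access, one of the future possibilities listed above.
    assert_eq!(
        offset_of!(Outer, inner.b),
        offset_of!(Outer, inner) + offset_of!(Inner, b)
    );
}
```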

I wrote this a few months ago, before the RFC merged. Now that it's merged, I decided to rebase and finish it.

cc `@thomcc` (RFC author)
2023-04-22 00:10:44 +00:00
Oli Scherer
1ce80e210d Allow LocalDefId as the argument to def_path_str 2023-04-21 22:27:20 +00:00
miguelraz
6f29a3c980 nit: consistent naming for SimplifyConstCondition 2023-04-21 15:45:25 -06:00
Camille GILLOT
2870d269f5 Actually keep PlaceMention if requested. 2023-04-21 21:34:59 +00:00
Kyle Matsuda
5a69b5d0f9 Changes from review 2023-04-21 09:57:37 -06:00
bors
4a03f14b09 Auto merge of #110569 - saethlin:mir-pass-cooperation, r=cjgillot
Deduplicate unreachable blocks, for real this time

In https://github.com/rust-lang/rust/pull/106428 (in particular 41eda69516) we noticed that inlining `unreachable_unchecked` can produce duplicate unreachable blocks. So we improved two MIR optimizations: `SimplifyCfg` gained a simplification that deduplicates unreachable blocks, and `InstCombine` gained a combiner that deduplicates switch targets pointing at the same block. The problem is that this change doesn't actually work.

Our current pass order is
```
SimplifyCfg (does nothing relevant to this situation)
Inline (produces multiple unreachable blocks)
InstCombine (doesn't do anything here, oops)
SimplifyCfg (produces the duplicate SwitchTargets that InstCombine is looking for)
```

So in here, I have factored out the specific function from `InstCombine` and placed it inside the simplify that produces the case it is looking for. This should ensure that it runs in the scenario it was designed for.
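
A hedged sketch (not a test from the PR) of the kind of source that hits this: each inlined `unreachable_unchecked` contributes its own unreachable block, and the switch on the scrutinee ends up with several targets that should collapse into one.

```rust
// Hedged sketch, not a test from the PR: after inlining, the MIR for `get`
// contains multiple unreachable blocks, and the `SwitchInt` on `idx` has
// duplicate targets pointing at them.
use std::hint::unreachable_unchecked;

pub unsafe fn get(idx: u8) -> u32 {
    match idx {
        0 => 10,
        1 => 20,
        2 => unreachable_unchecked(),
        _ => unreachable_unchecked(),
    }
}

fn main() {
    assert_eq!(unsafe { get(1) }, 20);
}
```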

Fixes https://github.com/rust-lang/rust/issues/110551
r? `@cjgillot`
2023-04-21 15:08:02 +00:00
DrMeepster
511e457c4b offset_of 2023-04-21 02:14:02 -07:00
Ben Kimock
8ec49ad19a Run combine_duplicate_switch_targets after the simplification that produces them 2023-04-20 20:40:01 -04:00
Kyle Matsuda
e54854f6a9 add subst_identity_iter and subst_identity_iter_copied methods on EarlyBinder; use this to simplify some EarlyBinder noise around explicit_item_bounds calls 2023-04-20 12:36:50 -06:00
Kyle Matsuda
f3b279fcc5 add EarlyBinder to output of explicit_item_bounds; replace bound_explicit_item_bounds usages; remove bound_explicit_item_bounds query 2023-04-20 12:36:50 -06:00
Kyle Matsuda
0892a7380b change usages of explicit_item_bounds to bound_explicit_item_bounds 2023-04-20 12:36:50 -06:00
Camille GILLOT
b275d2c30b Remove WithOptconstParam. 2023-04-20 17:48:32 +00:00
Maybe Waffle
25b9263b34 Move GenericArgKind::as_{type,const,region} to GenericArg 2023-04-19 17:59:30 +00:00
Maybe Waffle
3f15521396 Add GenericArgKind::as_{type,const,region} 2023-04-19 14:54:31 +00:00
bors
9e7f72c57d Auto merge of #110477 - miguelraz:canoodling2-electric-boogaloo, r=compiler-errors
Don't allocate on SimplifyCfg/Locals/Const on every MIR pass

Hey! 👋🏾 This is a first PR attempt to see if I could speed up some rustc internals.

Thought process:

```rust
pub struct SimplifyCfg {
    label: String,
}
```
in [compiler/rustc_mir_transform/src/simplify.rs](7908a1d654/compiler/rustc_mir_transform/src/simplify.rs (L39)) fires multiple times per MIR analysis. This means that a string allocation is likely happening on each of these runs, which may add up, as the labels are not lazily allocated or cached between the different passes.

...yes, I know that adding a global static array is probably not the future-proof solution, but I wanted to lob this now as a proof of concept to see if it's worth shaving off a few cycles and then making more robust.
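
A minimal sketch of the enum-based approach; the variant names here are illustrative, not the exact set that landed.

```rust
// Each pass instantiation becomes a unit variant, so the pass name is a
// `&'static str` and no `String` is allocated per run.
#[derive(Clone, Copy)]
pub enum SimplifyCfg {
    Initial,
    PromoteConsts,
    ElaborateDrops,
    Final,
}

impl SimplifyCfg {
    pub fn name(self) -> &'static str {
        match self {
            SimplifyCfg::Initial => "SimplifyCfg-initial",
            SimplifyCfg::PromoteConsts => "SimplifyCfg-promote-consts",
            SimplifyCfg::ElaborateDrops => "SimplifyCfg-elaborate-drops",
            SimplifyCfg::Final => "SimplifyCfg-final",
        }
    }
}

fn main() {
    assert_eq!(SimplifyCfg::Initial.name(), "SimplifyCfg-initial");
}
```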
2023-04-19 02:57:19 +00:00
bors
b3f1379509 Auto merge of #110083 - saethlin:encode-hashes-as-bytes, r=cjgillot
Encode hashes as bytes, not varint

In a few places, we store hashes as `u64` or `u128` and then apply `derive(Decodable, Encodable)` to the enclosing struct/enum. It is more efficient to encode hashes directly than try to apply some varint encoding. This PR adds two new types `Hash64` and `Hash128` which are produced by `StableHasher` and replace every use of storing a `u64` or `u128` that represents a hash.

Distribution of the byte lengths of leb128 encodings, from `x build --stage 2` with `incremental = true`

Before:
```
(  1) 373418203 (53.7%, 53.7%): 1
(  2) 196240113 (28.2%, 81.9%): 3
(  3) 108157958 (15.6%, 97.5%): 2
(  4)  17213120 ( 2.5%, 99.9%): 4
(  5)    223614 ( 0.0%,100.0%): 9
(  6)    216262 ( 0.0%,100.0%): 10
(  7)     15447 ( 0.0%,100.0%): 5
(  8)      3633 ( 0.0%,100.0%): 19
(  9)      3030 ( 0.0%,100.0%): 8
( 10)      1167 ( 0.0%,100.0%): 18
( 11)      1032 ( 0.0%,100.0%): 7
( 12)      1003 ( 0.0%,100.0%): 6
( 13)        10 ( 0.0%,100.0%): 16
( 14)        10 ( 0.0%,100.0%): 17
( 15)         5 ( 0.0%,100.0%): 12
( 16)         4 ( 0.0%,100.0%): 14
```

After:
```
(  1) 372939136 (53.7%, 53.7%): 1
(  2) 196240140 (28.3%, 82.0%): 3
(  3) 108014969 (15.6%, 97.5%): 2
(  4)  17192375 ( 2.5%,100.0%): 4
(  5)       435 ( 0.0%,100.0%): 5
(  6)        83 ( 0.0%,100.0%): 18
(  7)        79 ( 0.0%,100.0%): 10
(  8)        50 ( 0.0%,100.0%): 9
(  9)         6 ( 0.0%,100.0%): 19
```

The remaining 9- or 10-byte and 18- or 19-byte encodings are `u64` and `u128` values, respectively, that have their high bits set. As far as I can tell these come primarily from `SwitchTargets`.
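
A hedged sketch of the idea behind the new types, not the `rustc_data_structures` implementation: hash values are uniformly distributed, so a fixed-width byte encoding beats a varint.

```rust
// Illustrative newtype: encode exactly 8 bytes instead of a LEB128 varint,
// which saves nothing for uniformly distributed values and usually costs
// extra bytes and branching.
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
pub struct Hash64(u64);

impl Hash64 {
    fn encode(self, out: &mut Vec<u8>) {
        out.extend_from_slice(&self.0.to_le_bytes());
    }

    fn decode(buf: &[u8]) -> (Self, &[u8]) {
        let (head, rest) = buf.split_at(8);
        (Hash64(u64::from_le_bytes(head.try_into().unwrap())), rest)
    }
}

fn main() {
    let mut buf = Vec::new();
    Hash64(0xdead_beef_dead_beef).encode(&mut buf);
    assert_eq!(buf.len(), 8); // always 8 bytes, regardless of the value
    let (h, rest) = Hash64::decode(&buf);
    assert_eq!(h, Hash64(0xdead_beef_dead_beef));
    assert!(rest.is_empty());
}
```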
2023-04-18 22:27:15 +00:00
miguelraz
fc27ae14f6 refactor SimplifyCfg and friends - no globals, just enums 2023-04-18 12:30:00 -06:00
Ben Kimock
0445fbdd83 Store hashes in special types so they aren't accidentally encoded as numbers 2023-04-18 10:52:47 -04:00
Josh Soref
e09d0d2a29 Spelling - compiler
* account
* achieved
* advising
* always
* ambiguous
* analysis
* annotations
* appropriate
* build
* candidates
* cascading
* category
* character
* clarification
* compound
* conceptually
* constituent
* consts
* convenience
* corresponds
* debruijn
* debug
* debugable
* debuggable
* deterministic
* discriminant
* display
* documentation
* doesn't
* ellipsis
* erroneous
* evaluability
* evaluate
* evaluation
* explicitly
* fallible
* fulfill
* getting
* has
* highlighting
* illustrative
* imported
* incompatible
* infringing
* initialized
* into
* intrinsic
* introduced
* javascript
* liveness
* metadata
* monomorphization
* nonexistent
* nontrivial
* obligation
* obligations
* offset
* opaque
* opportunities
* opt-in
* outlive
* overlapping
* paragraph
* parentheses
* poisson
* precisely
* predecessors
* predicates
* preexisting
* propagated
* really
* reentrant
* referent
* responsibility
* rustonomicon
* shortcircuit
* simplifiable
* simplifications
* specify
* stabilized
* structurally
* suggestibility
* translatable
* transmuting
* two
* unclosed
* uninhabited
* visibility
* volatile
* workaround

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2023-04-17 16:09:18 -04:00
Matthias Krüger
a76b157624
Rollup merge of #110434 - compiler-errors:issue-110171, r=oli-obk
Check freeze with right param-env in `deduced_param_attrs`

We're checking if a trait (`Freeze`) holds in a polymorphic function, but not using that function's own (reveal-all) param-env. This causes us to try to eagerly normalize a specializable projection type that has no default value, which causes an ICE.

Fixes #110171
2023-04-17 18:13:36 +02:00
Matthias Krüger
1795bf8222
Rollup merge of #110404 - matthiaskrgr:mapmap, r=Nilstrieb
fix clippy::toplevel_ref_arg and ::manual_map

r? ``@Nilstrieb``
2023-04-17 08:09:40 +02:00
bors
5546cb64f6 Auto merge of #109247 - saethlin:inline-without-inline, r=oli-obk
Permit MIR inlining without #[inline]

I noticed that there are at least a handful of portable-simd functions that have no `#[inline]` but compile to an assign + return.

I locally benchmarked inlining thresholds between 0 and 50 in increments of 5, and 50 seems to be the best. Interesting. That didn't include check builds though, ~maybe perf will have something to say about that~.

Perf has little of use to say about this. We generally regress all the check builds, as best I can tell, due to a number of small codegen changes in a particular hot function in the compiler. Probably this is because we've nudged the inlining outcomes all over, and uses of `#[inline(always)]`/`#[inline(never)]` might need to be adjusted.
2023-04-17 02:36:38 +00:00