Add `array::IntoIter::{empty, from_raw_parts}`
`array::IntoIter` has a bunch of really handy logic for dealing with partial arrays, but it's currently hamstrung by only being creatable from a fully-initialized array.
This PR adds two new constructors:
- a safe & const `empty`, since `[].into_iter()` can only give `IntoIter<T, 0>`, not `IntoIter<T, N>`.
- an unsafe `from_raw_parts`, to allow experimentation with new uses.
(Slice & vec iterators don't need `from_raw_parts` because you `from_raw_parts` the slice or vec instead, but there's no useful way to make a `<[T; N]>::from_raw_parts`, so I think this is a reasonable place to have one.)
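To illustrate, here is roughly the kind of code this enables. This is only a sketch: the feature gate name and the exact final signatures are assumptions, since the API would be unstable.

```rust
// Sketch only: the feature gate name is an assumption for this unstable API.
#![feature(array_into_iter_constructors)]

use std::array;

// Returning a generic `IntoIter<T, N>` with no remaining elements is not
// possible via `[].into_iter()`, which is always `IntoIter<T, 0>`.
fn drain_or_empty<T, const N: usize>(opt: Option<[T; N]>) -> array::IntoIter<T, N> {
    match opt {
        // By-value array iteration (edition 2021 method-call semantics).
        Some(arr) => arr.into_iter(),
        // The new safe & const constructor from this PR.
        None => array::IntoIter::empty(),
    }
}

fn main() {
    let it = drain_or_empty::<i32, 3>(None);
    assert_eq!(it.len(), 0);
}
```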
Fix AnonConst ICE
I am not sure if this is even the correct place to fix this issue, but I went down the path of where the generic args came from and wasn't able to find a clear cause for this down there. If anybody has a suggestion for what I should do instead, please tell me.
This fixes: https://github.com/rust-lang/rust/issues/91267
Add test for evaluate_obligation: Ok(EvaluatedToOkModuloRegions) ICE
Adds the minimal repro test case from #85360. The fix for #85360 was
supposed to be #85868; however, the repro was resolved in the 2021-07-05
nightly, while #85868 didn't land until 2021-09-03. The reason for that
is that d34a3a401b **also** resolves that
issue.
To test whether #85868 actually fixes #85360, I reverted
d34a3a401b and found that #85868 does
indeed resolve #85360.
With that question resolved, add a test case to our incremental test
suite for the original Ok(EvaluatedToOkModuloRegions) ICE.
Thanks to `@lqd` for helping track this down!
We already use the object crate for generating uncompressed .rmeta
metadata object files. This switches the generation of compressed
.rustc object files to use the object crate as well. These have
slightly different requirements in that .rmeta should be completely
excluded from any final compilation artifacts, while .rustc should
be part of shared objects, but not loaded into memory.
The primary motivation for this change is #90326: In LLVM 14, the
current way of setting section flags (and in particular, preventing
the setting of SHF_ALLOC) will no longer work. There are other ways
we could work around this, but switching to the object crate seems
like the most elegant, as we already use it for .rmeta, and as it
makes this independent of the codegen backend. In particular, we
don't need separate handling in codegen_llvm and codegen_gcc.
codegen_cranelift should be able to reuse the implementation as
well, though I have omitted that here, as it is not based on
codegen_ssa.
This change mostly extracts the existing code for .rmeta handling
to allow using it for .rustc as well, and adjusts the codegen
infrastructure to handle the metadata object file separately: We
no longer create a backend-specific module for it, and directly
produce the compiled module instead.
This does not fix #90326 by itself yet, as .llvmbc will need to be
handled separately.
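As a rough sketch of the approach (not the exact codegen_ssa code: the section kind, compression, and per-format handling are simplified here), creating such a metadata object with the object crate looks something like this:

```rust
use object::write::Object;
use object::{Architecture, BinaryFormat, Endianness, SectionKind};

// Sketch: build an ELF object whose only content is a metadata section.
// `SectionKind::Debug` is used here as an example of a non-allocated kind
// (no SHF_ALLOC), so the payload ends up in the file but is never mapped
// into memory at load time; the real implementation picks kinds per format.
fn make_metadata_object(metadata: &[u8]) -> Vec<u8> {
    let mut file = Object::new(BinaryFormat::Elf, Architecture::X86_64, Endianness::Little);
    let section = file.add_section(Vec::new(), b".rustc".to_vec(), SectionKind::Debug);
    file.append_section_data(section, metadata, /* align */ 1);
    file.write().expect("failed to serialize metadata object file")
}
```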
Replace dominators algorithm with simple Lengauer-Tarjan
This PR replaces our dominators implementation with that of the simple Lengauer-Tarjan algorithm, which is (to my knowledge and research) the currently accepted 'best' algorithm. The more complex variant has higher constant time overheads, and Semi-NCA (which is arguably a variant of Lengauer-Tarjan too) is not the preferred variant by the first paper cited in the documentation comments: simple Lengauer-Tarjan "is less sensitive to pathological instances, we think it should be preferred where performance guarantees are important" - which they are for us.
This work originally arose from noting that the keccak benchmark spent a considerable portion of its time (both instructions and cycles) in the dominator computations, which sparked an interest in potentially optimizing that code. The current algorithm largely proves slow on long "parallel" chains where the nearest common ancestor lookup (i.e., the intersect function) does not quickly identify a root; it is also inherently a pointer-chasing algorithm so is relatively slow on modern CPUs due to needing to hit memory - though usually in cache - in a tight loop, which still costs several cycles.
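For context, the heart of the old approach is an "intersect" (nearest common ancestor) loop along the lines of the well-known Cooper-Harvey-Kennedy formulation; the sketch below is illustrative, not the exact rustc code:

```rust
/// Walk two nodes up the current immediate-dominator tree until they meet.
/// `idom[v]` is the current idom candidate of `v`, and `postorder[v]` is
/// `v`'s postorder number (the DFS root has the highest number).
fn intersect(idom: &[usize], postorder: &[usize], mut a: usize, mut b: usize) -> usize {
    while a != b {
        // Each step chases a pointer through `idom`, which is exactly the
        // cache-unfriendly tight loop described above.
        while postorder[a] < postorder[b] {
            a = idom[a];
        }
        while postorder[b] < postorder[a] {
            b = idom[b];
        }
    }
    a
}
```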
This was replaced with a bitset-based algorithm, previously studied in literature but implemented directly from dataflow equations in our case, which proved to be a significant speed up on the keccak benchmark: 20% instruction count wins, as can be seen in [this performance report](https://perf.rust-lang.org/compare.html?start=377d1a984cd2a53327092b90aa1d8b7e22d1e347&end=542da47ff78aa462384062229dad0675792f2638). This algorithm is also relatively simple in comparison to other algorithms and is easy to understand. However, these performance results showed a regression on a number of other benchmarks, and I was unable to get the bitsets to perform well enough that those regressions could be fully mitigated. The implementation "attempt" is seen here in the first commit, and is intended to be kept primarily so that future optimizers do not repeat that path (or can easily refer to the attempt).
The final version of this PR chooses the simple Lengauer-Tarjan algorithm, and implements it along with a number of optimizations found in the literature. The current implementation is a slight improvement for many benchmarks, with keccak still being an outlier at ~20%. The implementation in this PR first implements the most basic variant of the algorithm directly from the pseudocode on page 16 (page 28 in the PDF) of the first paper ("Linear-Time Algorithms for Dominators and Related Problems"). This is then followed by a number of commits which update the implementation to apply various performance improvements, as suggested by the paper. Finally, the last commit annotates the implementation with a number of comments, mostly drawn from the paper, which are intended to help readers understand what is going on - these are incomplete without the paper, but writing them certainly helped my understanding. They may be helpful if future optimizations are attempted, so I chose to add them in.
When we point at a binding to suggest giving it a type, erase all the
types for ADTs that have been resolved, leaving only the ones that could
not be inferred. For small shallow types this is not a problem, but for
big nested types with lots of params, this can otherwise cause a lot of
unnecessary visual output.
Update Clippy
Since RLS is already broken anyway (#91543), we shouldn't be blocked on it anymore. I plan to do the RLS update once new rustc-ap packages are released.
r? `@Manishearth`
This largely avoids remapping from and to the 'real' indices, with the exception
of predecessor lookup and the final merge back, and is conceptually better.
As the paper indicates, the unprocessed vertices in the DFS tree and processed
vertices are disjoint, and we can use them in the same space, tracking only the index
of the split.
This replaces the previous implementation with the simple variant of
Lengauer-Tarjan, which performs better in the general case. Performance on the
keccak benchmark is about equivalent between the two, but we don't see
regressions (and indeed see improvements) on other benchmarks, even on a
partially optimized implementation.
The implementation here follows that of the pseudocode in "Linear-Time
Algorithms for Dominators and Related Problems" thesis by Loukas Georgiadis. The
next few commits will optimize the implementation as suggested in the thesis.
Several related works are cited in the comments within the implementation, as
well.
Implement the simple Lengauer-Tarjan algorithm
This replaces the previous implementation (from #34169), which has not been
optimized since, with the simple variant of Lengauer-Tarjan which performs
better in the general case. A previous attempt -- not kept in commit history --
attempted a replacement with a bitset-based implementation, but this led to
regressions on perf.rust-lang.org benchmarks and equivalent wins for the keccak
benchmark, so was rejected.
The implementation here follows that of the pseudocode in "Linear-Time
Algorithms for Dominators and Related Problems" thesis by Loukas Georgiadis. The
next few commits will optimize the implementation as suggested in the thesis.
Several related works are cited in the comments within the implementation, as
well.
On the keccak benchmark, we were previously spending 15% of our cycles computing
the NCA / intersect function; this function is quite expensive, especially on
modern CPUs, as it chases pointers on every iteration in a tight loop. With this
commit, we spend ~0.05% of our time in dominator computation.
Also add a test case for inserting a semicolon on extern fns.
Without this fix, we got an error like this:
```
error: expected one of `->`, `where`, or `{`, found `}`
 --> chk.rs:3:1
  |
2 | fn foo()
  |    ---  - expected one of `->`, `where`, or `{`
  |    |
  |    while parsing this `fn`
3 | }
  | ^ unexpected token
```
Since this is inside an extern block, you're required to write function
prototypes with no body. This fixes a regression, and adds a test case
for it.
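For reference, a sketch of the accepted form (the actual test case added here may differ); omitting the trailing semicolon, as in the `chk.rs` above, is what produced the diagnostic:

```rust
extern "C" {
    // Prototypes in an extern block take no body and end with a semicolon;
    // the parser now suggests inserting the `;` instead of expecting a body.
    fn foo();
}

fn main() {}
```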
Since the serialization format isn't self-describing, we need a way to detect
when encoder and decoder don't match up. But that doesn't have to
be UTF-8 validation for strings, which costs a few percent of performance.
Instead, we can use a marker byte at the end to be reasonably
sure that we're dealing with a string and it wasn't overwritten in some
way.
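A minimal sketch of the idea follows; the marker value, length encoding, and encoder plumbing are illustrative, not rustc's actual serializer:

```rust
use std::convert::TryInto;

// Sketch: 0xFF can never occur in valid UTF-8, so it makes a usable marker.
// The byte rustc actually uses may differ.
const STR_MARKER: u8 = 0xFF;

fn encode_str(out: &mut Vec<u8>, s: &str) {
    out.extend_from_slice(&(s.len() as u64).to_le_bytes());
    out.extend_from_slice(s.as_bytes());
    out.push(STR_MARKER);
}

fn decode_str(buf: &[u8]) -> Option<&str> {
    let len = u64::from_le_bytes(buf.get(..8)?.try_into().ok()?) as usize;
    let bytes = buf.get(8..8 + len)?;
    // Instead of validating UTF-8 on every decode, check the marker: if the
    // encoder and decoder disagree about the layout, it will almost
    // certainly not line up.
    if buf.get(8 + len) != Some(&STR_MARKER) {
        return None;
    }
    // Assumption of this sketch: the data was produced by `encode_str`, so
    // it is valid UTF-8; skipping validation is the point of the marker.
    Some(unsafe { std::str::from_utf8_unchecked(bytes) })
}

fn main() {
    let mut buf = Vec::new();
    encode_str(&mut buf, "hello");
    assert_eq!(decode_str(&buf), Some("hello"));
}
```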
Stop enabling `in_band_lifetimes` in rustc_data_structures
There's a conversation started in the tracking issue about possibly unaccepting `in_band_lifetimes`, but it's used heavily in the compiler, and thus there'd need to be a bunch of PRs like this if that were to happen.
So here's one to see how much of an impact it has. For this crate, at least, it doesn't seem like in-band was a big win -- about half the places that were using it didn't even need a named lifetime.
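To illustrate (example code, not from the compiler) what the feature allowed and why plain elision often suffices:

```rust
// With `#![feature(in_band_lifetimes)]`, a lifetime could be used without
// being declared on the function, e.g.
//
//     fn first(s: &'a str) -> &'a str { ... }
//
// Without the feature, the lifetime is declared explicitly:
fn first_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

// ...and in many of the touched places no named lifetime was needed at all,
// since ordinary elision already ties the output to the single input:
fn first_elided(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_explicit("hello world"), "hello");
    assert_eq!(first_elided("hello world"), "hello");
}
```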
(Oh, and I removed `nll` while I was here too, since it didn't seem needed. Let me know if I should put that back.)
r? `@petrochenkov`
Delete duplicated helpers from HIR printer
These functions (`cbox`, `nbsp`, `word_nbsp`, `head`, `bopen`, `space_if_not_bol`, `break_offset_if_not_bol`, `synth_comment`, `maybe_print_trailing_comment`, `print_remaining_comments`) are duplicated with identical behavior across the AST printer and HIR printer, but are not specific to AST or HIR data structures.
Fix some false negatives for [`single_char_pattern`]
changelog: Fix some false negatives for [`single_char_pattern`]
I noticed that Clippy wasn't complaining about my usage of `split_once("x")` in a personal project, so I updated the list of functions.
I had to update the test case for an unrelated issue because `replace` is now included in the list of functions to be linted.
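For example (illustrative), the lint now also catches the `split_once` case:

```rust
fn main() {
    let s = "key=value";
    // A single-character &str pattern is now flagged by `single_char_pattern`...
    let _ = s.split_once("=");
    // ...with the suggestion to use a `char` pattern instead:
    let _ = s.split_once('=');
}
```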