Remove adjacent all-const match arm hack.
An old fix for moves-in-guards had a hack for adjacent all-const match arms.
The hack was explained in a comment, which you can see here:
https://github.com/rust-lang/rust/pull/22580/files#diff-402a0fa4b3c6755c5650027c6d4cf1efR497
But the hack was incomplete (and thus unsound), as pointed out here:
https://github.com/rust-lang/rust/issues/47295#issuecomment-357108458
Plus, it is likely to be at least tricky to reimplement this hack in
the new NLL borrowck.
So rather than try to preserve the hack, we want to try to just remove
it outright. (At least to see the results of a crater run.)
[breaking-change]
This is a breaking-change, but our hope is that no one is actually
relying on such an extreme special case. (We hypothesize the hack was
originally added to accommodate a file in our own test suite, not code
in the wild.)
Extend two-phase borrows to apply to method receiver autorefs
Fixes #48598 by permitting two-phase borrows on the autorefs created when calling functions and methods.
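For illustration, here's the kind of receiver-autoref code this is meant to accept (a minimal sketch; not necessarily the exact example from the issue):

```rust
fn main() {
    let mut v = vec![0, 1, 2];
    // `push` autorefs the receiver to `&mut v`, while the argument `v.len()`
    // takes a shared borrow of `v`. With two-phase borrows the mutable borrow
    // is only reserved, not activated, until the arguments are evaluated,
    // so this is accepted.
    v.push(v.len());
    assert_eq!(v, [0, 1, 2, 3]);
}
```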
Add Iterator::find_map
I'd like to propose adding a `find_map` method to `Iterator`: an occasionally useful utility which relates to `filter_map` in the same way that `find` relates to `filter`.
`find_map` takes an `Option`-returning function, applies it to the elements of the iterator, and returns the first non-`None` result. In other words, `find_map(f) == filter_map(f).next()`.
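For example, here's a small sketch of the intended semantics, spelled with today's `filter_map(f).next()` (the data is just an illustration):

```rust
fn main() {
    let strings = ["lol", "NaN", "2", "5"];

    // Today's spelling of the proposed `find_map(f)`:
    let first_number = strings
        .iter()
        .filter_map(|s| s.parse::<i32>().ok())
        .next();

    assert_eq!(first_number, Some(2));
}
```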
Why would we want to add a function to `Iterator` that can be trivially expressed as a combination of existing ones? Observe that `find(f) == filter(f).next()`, so, by the same logic, `find` itself is unnecessary!
The more positive argument is that the desugaring of `find[_map]` in terms of `filter[_map]().next()` is not super obvious, because the `filter` operation reads as if it applies to the whole collection, although in reality we are interested only in the first element. That is, the jump from "I need a **single** result" to "let's use a function which maps **many** values to **many** values" is a non-trivial speed-bump, and causes friction when reading and writing code.
Does the need for `find_map` arise in practice? Yes!
* Anecdotally, I've more than once searched the docs for a function with the `[T] -> (T -> Option<U>) -> Option<U>` signature.
* The direct cause for this PR was [this](1291c50e86 (r174934173)) discussion in Cargo, which boils down to "there's some pattern that we try to express here, but current approaches look non-pretty" (and the pattern is `filter_map(..).next()`).
* There are several `filter_map().next` combos in Cargo: [[1]](545a4a2c93/src/cargo/ops/cargo_new.rs (L585)), [[2]](545a4a2c93/src/cargo/core/resolver/mod.rs (L1130)), [[3]](545a4a2c93/src/cargo/ops/cargo_rustc/mod.rs (L1086)).
* I've also needed similar functionality in `Kotlin` several times. There, it is expressed as `mapNotNull {}.firstOrNull`, as can be seen [here](ee8bdb4e07/src/main/kotlin/org/rust/cargo/project/model/impl/CargoProjectImpl.kt (L154)), [here](ee8bdb4e07/src/main/kotlin/org/rust/lang/core/resolve/ImplLookup.kt (L444)), [here](ee8bdb4e07/src/main/kotlin/org/rust/ide/inspections/RsLint.kt (L38)) and [here](ee8bdb4e07/src/main/kotlin/org/rust/cargo/toolchain/RustToolchain.kt (L74)) (and maybe in some other cases as well).
Note that it is definitely not among the most popular functions (it is certainly less popular than `find`), but, in the case of Cargo for example, it seems to be more popular than `rposition` (one occurrence), `step_by` (zero occurrences) and `nth` (three occurrences, all as `nth(0)`, which probably should be replaced with `next`).
Do we necessarily need this function in `std`? Could we move it to itertools? That is possible, but observe that `filter`, `filter_map`, `find` and `find_map` together really form a complete table:
| | all matching elements | first matching element |
|---|---|---|
| predicate `T -> bool` | `filter` | `find` |
| mapping `T -> Option<U>` | `filter_map` | `find_map` |
It would be somewhat unsatisfying to have one quarter of this table live elsewhere :) Also, if `Itertools` adds a `find_map` method, it would be more difficult to move it to std due to the name collision.
Hm, at this point I've searched for `filter_map` for the umpteenth time, and, strangely, this time I do find this RFC: https://github.com/rust-lang/rfcs/issues/1801. I guess this PR could be an implementation of it, though? :)
To sum up:
Pro:
- complete the symmetry with an existing method
- codify a somewhat common non-obvious pattern
Contra:
- niche use case
- we can, and do, live without it
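For reference, here's a minimal sketch of the proposed semantics as an extension trait. The trait and method name are made up for illustration; the proposal itself is to add `find_map` directly to `Iterator`.

```rust
trait FindMapExt: Iterator {
    fn find_map_sketch<B, F>(&mut self, mut f: F) -> Option<B>
    where
        F: FnMut(Self::Item) -> Option<B>,
    {
        // Stop at the first element for which `f` returns `Some`.
        while let Some(x) = self.next() {
            if let Some(y) = f(x) {
                return Some(y);
            }
        }
        None
    }
}

impl<I: Iterator> FindMapExt for I {}

fn main() {
    // First even number, doubled.
    let first = (1..10).find_map_sketch(|x| if x % 2 == 0 { Some(x * 2) } else { None });
    assert_eq!(first, Some(4));
}
```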
Bump to 1.27.0
Also update some `Cargo.lock` dependencies, finishing up some final steps of our
[release process]!
This doesn't update the bootstrap compiler just yet but that will come in a
follow-up PR.
[release process]: https://forge.rust-lang.org/release-process.html
This commit is a reorganization of the `proc_macro` crate's public user-facing
API. This is the result of a number of discussions at the recent Rust All-Hands
where we're hoping to get the `proc_macro` crate into ship shape for
stabilization of a subset of its functionality in the Rust 2018 release.
The reorganization here is motivated by experiences from the `proc-macro2`,
`quote`, and `syn` crates on crates.io (and other crates which depend on them).
The main focus is future flexibility along with making a few more operations
consistent and/or fixing bugs. A summary of the changes made from today's
`proc_macro` API is:
* The `TokenNode` enum has been removed and the public fields of `TokenTree`
have also been removed. Instead the `TokenTree` type is now a public enum
(what `TokenNode` was) and each variant is an opaque struct which internally
contains `Span` information. This makes the various tokens a bit more
consistent, requires fewer wrappers, and otherwise provides good
future-compatibility as opaque structs are easy to modify later on.
* `Literal` integer constructors have been expanded to be unambiguous as to what
they're doing and also allow for more future flexibility. Previously
constructors like `Literal::float` and `Literal::integer` were used to create
unsuffixed literals and the concrete methods like `Literal::i32` would create
a suffixed token. This wasn't immediately clear to all users (the
suffixed/unsuffixed aspect) and having *one* constructor for unsuffixed
literals required us to pick a largest type, which may not always be the right
choice. To fix these issues, all constructors are now of the form
`Literal::i32_unsuffixed` or `Literal::i32_suffixed` (for all integral types).
This should allow future compatibility as well as making it immediately clear
what's suffixed and what isn't (see the sketch after this list).
* Each variant of `TokenTree` internally contains a `Span` which can also be
configured via `set_span`. For example `Literal` and `Term` now both
internally contain a `Span` rather than having it stored in an auxiliary
location.
* Constructors of all tokens are called `new` now (aka `Term::intern` is gone)
and most do not take spans. Manufactured tokens typically don't have a fresh
span to go with them and the span is purely used for error-reporting
**except** the span for `Term`, which currently affects hygiene. The default
span for all these constructed tokens is `Span::call_site()` for now.
The `Term` type's constructor explicitly requires passing in a `Span` to
provide future-proofing against possible hygiene changes. It's intended that a
first pass of stabilization will likely only stabilize `Span::call_site()`
which is an explicit opt-in for "I would like no hygiene here please". The
intention here is to make this explicit in procedural macros to be
forwards-compatible with a hygiene-specifying solution.
* Some of the conversions for `TokenStream` have been simplified a little.
* The `TokenTreeIter` iterator was renamed to `token_stream::IntoIter`.
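To make the bullets above concrete, here's a small sketch written against [`proc-macro2`] 0.3, which (as noted below) mirrors these changes; the crate pin and the exact printed forms are my assumptions, and this pre-stabilization API may still change:

```rust
extern crate proc_macro2; // proc-macro2 = "0.3" in Cargo.toml (assumed)

use proc_macro2::{Literal, Span, Term, TokenTree};

fn main() {
    // Unsuffixed vs. suffixed integer literals now have unambiguous constructors.
    let unsuffixed = Literal::i32_unsuffixed(1);
    let mut suffixed = Literal::i32_suffixed(1);
    println!("{} vs {}", unsuffixed, suffixed); // expected: `1` vs `1i32`

    // Every token carries its own Span, reconfigurable via `set_span`.
    suffixed.set_span(Span::call_site());

    // `Term::new` (replacing `Term::intern`) takes an explicit Span;
    // `Span::call_site()` is the "no hygiene here, please" opt-in.
    let ident = Term::new("x", Span::call_site());

    // `TokenTree` is now a public enum whose variants wrap these opaque structs.
    let _tokens = vec![
        TokenTree::Term(ident),
        TokenTree::Literal(unsuffixed),
        TokenTree::Literal(suffixed),
    ];
}
```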
Overall the hope is that this is the "final pass" at the API of `TokenStream`
and most of `TokenTree` before stabilization. Explicitly left out here are any
changes to `Span`'s API which will likely need to be re-evaluated before
stabilization.
All changes in this PR have already been reflected in the [`proc-macro2`],
`quote`, and `syn` crates. New versions of all these crates have also been
published to crates.io.
Once this lands in nightly I plan on making an internals post again summarizing
the changes made here and also calling on all macro authors to give the APIs a
spin and see how they work. Hopefully pending no major issues we can then have
an FCP to stabilize later this cycle!
[`proc-macro2`]: https://docs.rs/proc-macro2/0.3.1/proc_macro2/
Ideally I'd like to soon enable sccache for rustbuild itself and some of the
stage0 tools, but for that to work we'll need some better Rust support than the
pretty old version we were previously using!
Easy edition feature flag
We no longer gate features on epochs; instead we have a `#![feature(rust_2018_preview)]` that flips on a bunch of features (currently dyn_trait).
Based on #49001 to avoid merge conflicts
r? @nikomatsakis
Expand Attributes on Statements and Expressions
This enables attribute-macro expansion on statements and expressions while retaining the `stmt_expr_attributes` feature requirement for attributes on expressions.
closes #41475
cc #38356 @petrochenkov @jseyfried
r? @nrc
Use Alloc and Layout from core::heap.
94d1970bba moved the alloc::allocator
module to core::heap, moving e.g. Alloc and Layout out of the alloc
crate. While alloc::heap reexports them, it's better to use them from
where they really come from.
ef8804ba27 addressed #30170 by rejecting
huge alignments at the allocator API level, transforming a specific
platform bug/limitation into an enforced API limitation on all
platforms.
This change essentially reverts that commit, and instead makes alloc()
itself return AllocErr::Unsupported when receiving huge alignments.
This was discussed in https://github.com/rust-lang/rust/issues/32838#issuecomment-368348408
and following.
vec![0; n], via implementations of SpecFromElem, has an optimization
that uses with_capacity_zeroed instead of with_capacity, which will use
calloc instead of malloc, and avoid an extra memset.
This adds the same optimization for vec![ptr::null(); n] and
vec![ptr::null_mut(); n], assuming their bit pattern is all zeros (which is
true on all currently supported platforms).
This does so by adding an intermediate trait IsZero, which looks very
much like nonzero::Zeroable, but that one is on the way out, and doesn't
apply to pointers anyways.
Adding such a trait lets us avoid repeating the logic that chooses between
with_capacity_zeroed and with_capacity, and avoids making the macro more
complex to support generics.
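To illustrate the shape of the change, here is a simplified sketch of the IsZero idea (not the actual libstd code, which hooks into SpecFromElem and the internal with_capacity_zeroed path):

```rust
use std::ptr;

// A marker for element values whose bit pattern is all zeros, so the buffer
// can come from a zeroing allocation (calloc) instead of malloc + memset.
trait IsZero {
    fn is_zero(&self) -> bool;
}

impl IsZero for u8 {
    fn is_zero(&self) -> bool {
        *self == 0
    }
}

impl<T> IsZero for *const T {
    fn is_zero(&self) -> bool {
        self.is_null()
    }
}

impl<T> IsZero for *mut T {
    fn is_zero(&self) -> bool {
        self.is_null()
    }
}

// Sketch of the dispatch: libstd branches to with_capacity_zeroed here;
// outside libstd we can only mark where that branch would go.
fn from_elem_sketch<T: IsZero + Clone>(elem: T, n: usize) -> Vec<T> {
    if elem.is_zero() {
        // zeroed-allocation fast path (with_capacity_zeroed / calloc)
    } else {
        // generic path (with_capacity, then clone each element)
    }
    vec![elem; n]
}

fn main() {
    let v: Vec<*const i32> = from_elem_sketch(ptr::null(), 3);
    assert!(v.iter().all(|p| p.is_null()));
}
```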
This makes sure that the bits in each IdxSet between the universe length
and the end of the word are all zero instead of being in an indeterminate state.
This fixes a crash with RUST_LOG=rustc_mir, and is probably a good idea
anyway.
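For illustration, a hedged sketch of the invariant on a hypothetical word-based bitset (names and layout here are made up; the real IdxSet code differs):

```rust
const WORD_BITS: usize = 64;

// Force all bits at positions >= the universe length in the final word to
// zero, so they can never be observed in an indeterminate state.
fn trim_to_universe(words: &mut [u64], universe_len: usize) {
    let used = universe_len % WORD_BITS;
    if used != 0 {
        if let Some(last) = words.last_mut() {
            // Keep only the low `used` bits of the final word.
            *last &= (1u64 << used) - 1;
        }
    }
}

fn main() {
    // A universe of 70 indices needs two 64-bit words; start with garbage bits set.
    let mut words = vec![!0u64; 2];
    trim_to_universe(&mut words, 70);
    assert_eq!(words[0], !0u64);           // bits 0..64 untouched
    assert_eq!(words[1], (1u64 << 6) - 1); // only bits 64..70 may be set
}
```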
rustdoc: add an --edition flag to compile docs/doctests with a certain edition
To correspond with the 2018 edition, this adds a (currently unstable) `--edition` flag to rustdoc that makes it compile crates and doctests with the given edition. Once this lands, Cargo should be updated to pass this flag when the edition configuration option is given.
Fix escaped backslash in windows file not found message
When a module is declared, but no matching file exists, rustc gives
an error like `help: name the file either foo.rs or foo/mod.rs inside
the directory "src/bar"`. However, at on windows, the backslash was
double-escaped when naming the directory.
It did this because the string was printed in debug mode (`"{:?}"`) to
surround it with quotes. However, it should just be printed like any
other directory in an error message and surrounded by escaped quotes,
rather than relying on the debug print to add quotes (`"\"{}\""`).
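A minimal sketch of the formatting difference (the path shown is just an example):

```rust
use std::path::Path;

fn main() {
    let dir = Path::new(r"src\bar");
    // Old behavior: the Debug formatter adds quotes but double-escapes `\`.
    println!("inside the directory {:?}", dir);              // inside the directory "src\\bar"
    // New behavior: Display, with the quotes written in the format string.
    println!("inside the directory \"{}\"", dir.display());  // inside the directory "src\bar"
}
```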
I also checked the test suite to see if this output is being correctly tested. It's not - it only tests up to the word "directory". Presumably this is so that the test is not dependent on its exact position in the source tree. I don't know a better way to test this, unless the test suite supports regex?
Clarify network byte order conversions for integer / IP address conversions.
Opened primarily to address https://github.com/rust-lang/rust/issues/48819.
Also added a few other conversion docs/examples.
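For reference, one of the conversions in question (a small sketch; the values are just an illustration): the `u32` form corresponds to the address's octets read in big-endian, i.e. network byte order.

```rust
use std::net::Ipv4Addr;

fn main() {
    let addr = Ipv4Addr::new(13, 12, 11, 10);
    assert_eq!(u32::from(addr), 0x0d0c0b0a);
    assert_eq!(Ipv4Addr::from(0x0d0c0b0au32), addr);
}
```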
proc_macro: Tweak doc comments and negative literals
This commit tweaks the tokenization of a doc comment to use `#[doc = "..."]`
like `macro_rules!` does (instead of treating it as a `Literal` token).
Additionally it fixes treatment of negative literals in the compiler, for
example `Literal::i32(-1)`. The current fix is a bit of a hack around the
current compiler implementation, providing a fix at the proc-macro layer rather
than the libsyntax layer.
Closes #48889