[breaking-change] Technically breaking, since code that had been using
these transmutes before will no longer compile. However, it was
undefined behavior, so really, it's a good thing. Fixing your code would
require some re-working to use an `UnsafeCell` instead.
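As a rough sketch of what that rework looks like (the `Counter` type and its method are made up for illustration, not taken from any affected code):

```rust
use std::cell::UnsafeCell;

// Hypothetical type that needs to mutate a counter through a shared reference.
struct Counter {
    // Instead of transmuting &u32 to &mut u32 (undefined behavior), keep the
    // value in an UnsafeCell, the sanctioned way to mutate through &self.
    value: UnsafeCell<u32>,
}

impl Counter {
    fn new() -> Counter {
        Counter { value: UnsafeCell::new(0) }
    }

    fn bump(&self) -> u32 {
        // Still unsafe: the caller must ensure no other reference observes the
        // write, but the aliasing is now visible to the compiler.
        unsafe {
            *self.value.get() += 1;
            *self.value.get()
        }
    }
}

fn main() {
    let c = Counter::new();
    assert_eq!(c.bump(), 1);
    assert_eq!(c.bump(), 2);
}
```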
Closes #13146
The bug involves incorrect logic in `core::num::flt2dec::decoder`.
It causes some numbers of the form 2^n to be missing one final digit,
which breaks the bijectivity criterion. Regression tests have been
added, and the exhaustive f32 test has been rerun to get the updated result.
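For illustration, the bijectivity criterion amounts to a format-then-parse round trip; a check for it could look roughly like this (the exponent range below is arbitrary, not the exact set of values the bug affected):

```rust
// Sketch of a round-trip (bijectivity) check: the shortest decimal output of a
// float must parse back to exactly the same float.
fn roundtrips(x: f32) -> bool {
    match x.to_string().parse::<f32>() {
        Ok(y) => y == x,
        Err(_) => false,
    }
}

fn main() {
    // Powers of two are the kind of values the bug affected.
    for exp in -10..10 {
        let x = 2f32.powi(exp);
        assert!(roundtrips(x), "{} did not round-trip", x);
    }
}
```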
This is a fork of the flt2dec portion of rust-strconv [1] with
a necessary relicensing (the original code was licensed CC0-1.0).
Each module is accompanied by large unit tests, integrated
in this commit as `coretest::num::flt2dec`. This module is added
in order to replace the existing `core::fmt::float` method.
The forked revision of rust-strconv is from 2015-04-20, commit ID
9adf6d3571c6764a6f240a740c823024f70dc1c7.
[1] https://github.com/lifthrasiir/rust-strconv/
According to rust-lang/rfcs#235, `VecDeque` should have this method (`VecDeque` was called `RingBuf` at the time) but it was never implemented.
I marked this stable since "1.0.0" because it's stable in `Vec`.
Turns out that a verbatim path was leaking through to gcc via the PATH
environment variable (pointing to the bundled gcc provided by the main
distribution) which was wreaking havoc when gcc itself was run. The fix here is
to just stop passing verbatim paths down by adding more liberal uses of
`fix_windows_verbatim_for_gcc`.
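For reference, a verbatim path is one carrying the Windows `\\?\` prefix. The sketch below only illustrates the general idea of stripping that prefix; the real helper, `fix_windows_verbatim_for_gcc`, lives inside the compiler and handles more cases:

```rust
use std::path::{Path, PathBuf};

// Illustrative only: drop the \\?\ verbatim prefix so tools like gcc, which
// don't understand it, receive a plain drive-letter path instead.
fn deverbatimize(p: &Path) -> PathBuf {
    let s = p.to_string_lossy();
    if s.starts_with(r"\\?\") {
        PathBuf::from(&s[4..])
    } else {
        p.to_path_buf()
    }
}

fn main() {
    let p = Path::new(r"\\?\C:\mingw\bin");
    assert_eq!(deverbatimize(p), PathBuf::from(r"C:\mingw\bin"));
}
```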
Closes #25072
PR #24611 added these for other architectures, but missed
the `#[cfg(any(target_arch = "mips", target_arch = "mipsel"))]`
version of the module. The values are the same.
To prevent the reference grammar from getting out of sync with the real
grammar, panic if RustLexer.tokens contains an unknown token in a
similar way that verify.rs panics if it encounters an unknown binary
operation token.
There were some tokens used in the grammar but not declared. Antlr
doesn't really seem to care and happily uses them, but they appear in
RustLexer.tokens in a potentially-unexpected order.
This doesn't appear to have much of a detrimental effect, but it
doesn't seem to be what was intended either.
antlr doesn't mind that `PLUS` isn't declared in `tokens` and happily
uses the `PLUS` that appears later in the file, but the generated
RustLexer.tokens had PLUS at the end rather than where it was intended:
NOT=10
TILDE=11
PLUT=12
MINUS=13
...
PLUS=56
Many bounds are currently of the form `T: ?Sized + AsRef<OsStr>` where the
argument is `&T`, but the pattern elsewhere (primarily `std::fs`) has been to
remove the `?Sized` bound and take `T` instead (allowing usage with both
references and owned values). This commit generalizes the possible APIs in
`std::env` from `&T` to `T` in this fashion.
The `split_paths` function remains the same, as the return value borrows the
input value, so a borrowed reference is required.
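As an illustration of the pattern, using `var`-like signatures as the example (the exact list of functions changed isn't enumerated here):

```rust
use std::ffi::OsStr;

// Old style: the bound allows unsized K, but callers must always pass a reference.
fn var_old<K: ?Sized + AsRef<OsStr>>(key: &K) -> Option<String> {
    std::env::var(key.as_ref()).ok()
}

// New style: take K by value; &str, &OsStr, String, OsString, PathBuf, etc. all work.
fn var_new<K: AsRef<OsStr>>(key: K) -> Option<String> {
    std::env::var(key.as_ref()).ok()
}

fn main() {
    let owned = String::from("PATH");
    let _ = var_old("PATH"); // reference works
    let _ = var_new("PATH"); // reference still works...
    let _ = var_new(owned);  // ...and so does an owned value
}
```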
I've added backticks in a few places to ensure correct highlighting in the HTML output (cf #25062).
Other changes include:
* Remove use of `1.` and `2.` separated by a code block as this was being rendered as two separate lists beginning at 1.
* Correct the spelling of successful in two places (from "succesful").
Other changes are a result of reflowing text to stay within the 80 character limit.
This also made me realize that I wasn't using the correct term: it's
'associated functions', not 'static methods'. So I corrected that in
the method syntax chapter.
As pointed out in #17136 the semantics of a `BufStream` aren't always what one
expects, and it looks like other [languages like C#][c-sharp] implement a
buffered stream with only one underlying buffer. For now this commit
destabilizes the primitive in the `std::io` module to give us some more time in
figuring out what to do with it.
[c-sharp]: https://msdn.microsoft.com/en-us/library/system.io.bufferedstream%28v=vs.110%29.aspx
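For code that was using `BufStream`, buffered reads and writes over one stream can still be composed from `BufReader` and `BufWriter` directly; a rough sketch, assuming a `TcpStream` and `try_clone` (the address is made up):

```rust
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let stream = TcpStream::connect("127.0.0.1:7878")?;
    // Two separate buffers, one per direction, instead of BufStream's single type.
    let mut reader = BufReader::new(stream.try_clone()?);
    let mut writer = BufWriter::new(stream);

    writer.write_all(b"hello\n")?;
    writer.flush()?;

    let mut line = String::new();
    reader.read_line(&mut line)?;
    println!("got: {}", line.trim_end());
    Ok(())
}
```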
[breaking-change]
Specifically, make count, nth, and last call the corresponding methods
on the underlying iterator where possible. This way, if the underlying
iterator has optimized count, nth, or last implementations (e.g.
`slice::Iter`), these methods will propagate those optimizations.
Additionally, change Skip::next to take advantage of a potentially
optimized nth method on the underlying iterator.
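A simplified sketch of the `Skip::next` idea (the real adaptor also implements other methods):

```rust
// Simplified Skip adaptor: the first call to next() jumps over the skipped
// prefix with a single nth() call, so an inner iterator with an O(1) nth
// (e.g. slice::Iter) pays no per-element cost for the skip.
struct Skip<I> {
    iter: I,
    n: usize,
}

impl<I: Iterator> Iterator for Skip<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        if self.n > 0 {
            let n = self.n;
            self.n = 0;
            // nth(n) discards n items and yields the one after them.
            self.iter.nth(n)
        } else {
            self.iter.next()
        }
    }
}

fn main() {
    let v = [1, 2, 3, 4, 5];
    let mut it = Skip { iter: v.iter(), n: 2 };
    assert_eq!(it.next(), Some(&3)); // one nth(2) call under the hood
    assert_eq!(it.next(), Some(&4));
}
```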
Coherence now allows this: we have `SliceConcatExt<T> for [V]` where
`T: Sized + Clone`, and `SliceConcatExt<str> for [S]`; these don't conflict
because `str` is never `Sized`.
This test has deadlocked on Windows once or twice now, and we've had lots of
problems in the past with threads panicking when the process is being shut down.
One of the two threads in this test is guaranteed to panic because of the
`.unwrap()` on the `send` calls, so just call `recv` on both receivers after the
test executes to ensure that both threads are dying/dead.
Make `span_to_lines` return a `Result`.
(This is better than just asserting internally, since it allows callers
to decide if they can recover from the problem.)
Added the `FileLinesResult` type alias for the value returned by `span_to_lines`.
Update embedded unit test to reflect `span_to_lines` signature change.
In diagnostic, catch `Err` from `span_to_lines` and print
`"(internal compiler error: unprintable span)"` instead.
----
There are a number of recent issues that report the bug here. See
e.g. #24761 and #24954.
This change *might* fix them. However, that is not its main goal.
The main goals are:
1. Make it possible for callers to recover from an error here, and
2. Insert a more conservative check, in that we are
also checking that the files match up.