This PR adds a program which uses the JSON output from #24884 to generate a webpage with descriptions of each diagnostic error.
The page is constructed by hand, with calls to `rustdoc`'s markdown renderers where needed. I opted to generate HTML directly as I think it's more flexible than generating a markdown file and feeding it into the `rustdoc` executable. I envision adding the ability to filter errors by their properties (description, no description, used, unused), which is infeasible using the whole-file markdown approach due to the need to wrap each error in a `<div>` (markdown inside tags isn't rendered).
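For illustration, the generation boils down to something like the sketch below. The struct, codes, and description strings here are made up; the real program reads the JSON metadata and runs long descriptions through `rustdoc`'s markdown renderer rather than inlining plain text.

```rust
// Illustrative sketch only: one <div> per error code, which is what makes
// later filtering by property (described/undescribed, used/unused) feasible.
struct ErrorInfo {
    code: &'static str,
    description: Option<&'static str>,
}

fn render_page(errors: &[ErrorInfo]) -> String {
    let mut out = String::from("<!DOCTYPE html>\n<html>\n<body>\n");
    for err in errors {
        // Each error lives in its own <div>, so a future filter only has to
        // toggle div visibility rather than re-render markdown.
        out.push_str(&format!("<div class=\"error\" id=\"{}\">\n", err.code));
        out.push_str(&format!("<h2>{}</h2>\n", err.code));
        match err.description {
            Some(desc) => out.push_str(&format!("<p>{}</p>\n", desc)),
            None => out.push_str("<p>No description.</p>\n"),
        }
        out.push_str("</div>\n");
    }
    out.push_str("</body>\n</html>\n");
    out
}

fn main() {
    let errors = [
        ErrorInfo { code: "E0001", description: Some("Placeholder description.") },
        ErrorInfo { code: "E0002", description: None },
    ];
    println!("{}", render_page(&errors));
}
```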
Architecturally, I wasn't sure how to add this generator to the distribution. For the moment I've settled on a separate Rust program in `src/etc/` that gets compiled and run by a custom makefile rule. This approach doesn't seem too hackish, but I'm unsure if my usage of makefile variables is correct, particularly the call to `rustc` (and the `LD_LIBRARY_PATH` weirdness). Other options I considered were:
* Integrate the error-index generator into `rustdoc` so that it gets invoked via a flag and can be built as part of `rustdoc`.
* Add the error-index-generator as a "tool" to the `TOOLS` array, and make use of the facilities for building tools. The main reason I didn't do this was because it seemed like I'd need to add lots of stuff. I'm happy to investigate this further if it's the preferred method.
Specifically, make `count`, `nth`, and `last` call the corresponding methods on the underlying iterator where possible. This way, if the underlying iterator has optimized `count`, `nth`, or `last` implementations (e.g. `slice::Iter`), these methods will propagate those optimizations.
Additionally, change `Skip::next` to take advantage of a potentially optimized `nth` method on the underlying iterator (a rough sketch of this pattern appears after the list below).
This covers:
* core::iter::Chain: count, last, nth
* core::iter::Enumerate: count, nth
* core::iter::Peekable: count, last, nth
* core::iter::Skip: count, last, next (should call nth), nth
* core::iter::Take: nth
* core::iter::Fuse: count, last, nth
of #24214.
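Here is a minimal sketch of the idea, not the actual `core::iter` source, showing a `Skip`-like adapter forwarding to the inner iterator's `nth` and `count`:

```rust
// Illustrative adapter: skips the first `n` elements of `iter`.
struct MySkip<I> {
    iter: I,
    n: usize,
}

impl<I: Iterator> Iterator for MySkip<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        if self.n > 0 {
            // `nth(n)` skips `n` elements and yields the next one, so an inner
            // iterator with an O(1) `nth` (e.g. `slice::Iter`) skips cheaply.
            let n = self.n;
            self.n = 0;
            self.iter.nth(n)
        } else {
            self.iter.next()
        }
    }

    fn count(self) -> usize {
        // Delegate to the inner `count` instead of walking element by element.
        self.iter.count().saturating_sub(self.n)
    }
}

fn main() {
    let v = [1, 2, 3, 4, 5];

    let mut it = MySkip { iter: v.iter(), n: 2 };
    assert_eq!(it.next(), Some(&3));

    let it = MySkip { iter: v.iter(), n: 2 };
    assert_eq!(it.count(), 3);
}
```

With a `slice::Iter` underneath, skipping becomes a single pointer adjustment instead of a loop over the skipped elements.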
[breaking-change] Technically breaking, since code that had been using
these transmutes before will no longer compile. However, it was
undefined behavior, so really, it's a good thing. Fixing your code would
require some reworking to use an `UnsafeCell` instead.
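As a rough sketch of that rework (the type and method names here are made up for illustration), the data moves into an `UnsafeCell`, which is the sanctioned way to mutate through a shared reference:

```rust
use std::cell::UnsafeCell;

struct Counter {
    value: UnsafeCell<u32>,
}

impl Counter {
    fn new() -> Counter {
        Counter { value: UnsafeCell::new(0) }
    }

    fn bump(&self) {
        // Safety: this example is single-threaded and no other reference to
        // the contents is live while we write through the raw pointer.
        unsafe { *self.value.get() += 1; }
    }

    fn get(&self) -> u32 {
        unsafe { *self.value.get() }
    }
}

fn main() {
    let c = Counter::new();
    c.bump();
    c.bump();
    assert_eq!(c.get(), 2);
}
```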
Closes #13146
According to rust-lang/rfcs#235, `VecDeque` should have this method (`VecDeque` was called `RingBuf` at the time) but it was never implemented.
I marked this stable since "1.0.0" because it's stable in `Vec`.
Turns out that a verbatim path was leaking through to gcc via the PATH
environment variable (pointing to the bundled gcc provided by the main
distribution) which was wreaking havoc when gcc itself was run. The fix here is
to just stop passing verbatim paths down by adding more liberal uses of
`fix_windows_verbatim_for_gcc`.
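For context, the verbatim form is the `\\?\`-prefixed flavor of a Windows path, and the fix amounts to rewriting such paths into the plain form before they reach gcc. A toy version of that rewrite (`strip_verbatim` is a hypothetical helper, not the real `fix_windows_verbatim_for_gcc`):

```rust
// Rough sketch: drop the verbatim prefix so MSYS-based tools like gcc can
// understand the path.
fn strip_verbatim(path: &str) -> &str {
    path.strip_prefix(r"\\?\").unwrap_or(path)
}

fn main() {
    assert_eq!(strip_verbatim(r"\\?\C:\mingw\bin"), r"C:\mingw\bin");
    assert_eq!(strip_verbatim(r"C:\mingw\bin"), r"C:\mingw\bin");
}
```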
Closes #25072
To prevent the reference grammar from getting out of sync with the real
grammar, panic if RustLexer.tokens contains an unknown token in a
similar way that verify.rs panics if it encounters an unknown binary
operation token.
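The check can be as simple as the following sketch (a hypothetical helper, not the actual verify.rs code):

```rust
use std::collections::HashSet;

// Panic if RustLexer.tokens names a token the checker does not know about.
fn check_tokens(tokens_file: &str, known: &HashSet<&str>) {
    for line in tokens_file.lines() {
        // Each line of a .tokens file looks like `NAME=NUMBER`.
        if let Some((name, _num)) = line.split_once('=') {
            if !known.contains(name) {
                panic!("unknown token in RustLexer.tokens: {}", name);
            }
        }
    }
}

fn main() {
    let known: HashSet<&str> =
        ["NOT", "TILDE", "PLUS", "MINUS"].iter().copied().collect();
    check_tokens("NOT=10\nTILDE=11\nPLUS=12\nMINUS=13", &known);
}
```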
There were some tokens used in the grammar but not declared. ANTLR
doesn't really seem to care and happily uses them, but they appear in
RustLexer.tokens in a potentially-unexpected order.
This appears to not have too much of a detrimental effect, but it
doesn't seem to be what is intended either.
ANTLR doesn't mind that `PLUS` isn't declared in `tokens` and happily
uses the `PLUS` that appears later in the file, but the generated
RustLexer.tokens had `PLUS` at the end rather than where it was intended:

```
NOT=10
TILDE=11
PLUT=12
MINUS=13
...
PLUS=56
```
Many bounds are currently of the form `T: ?Sized + AsRef<OsStr>` where the
argument is `&T`, but the pattern elsewhere (primarily `std::fs`) has been to
remove the `?Sized` bound and take `T` instead (allowing usage with both
references and owned values). This commit generalizes the possible APIs in
`std::env` from `&T` to `T` in this fashion.
The `split_paths` function remains the same, as the return value borrows the
input value, so a borrowed reference is required.
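To make the difference concrete, here is a sketch of the two signature styles (the function names are illustrative, not actual `std::env` items):

```rust
use std::ffi::OsStr;

// Old style: the bound allows unsized T and the function takes a reference.
fn lookup_ref<T: AsRef<OsStr> + ?Sized>(key: &T) -> Option<String> {
    let _ = key.as_ref();
    None
}

// New style: take T by value, so both owned values and references work.
fn lookup<T: AsRef<OsStr>>(key: T) -> Option<String> {
    let _ = key.as_ref();
    None
}

fn main() {
    // With the by-value bound, callers can pass `&str`, `String`, `&OsStr`, ...
    let _ = lookup("PATH");
    let _ = lookup(String::from("PATH"));
    let _ = lookup_ref("PATH");
}
```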
I've added backticks in a few places to ensure correct highlighting in the HTML output (cf #25062).
Other changes include:
* Remove use of `1.` and `2.` separated by a code block as this was being rendered as two separate lists beginning at 1.
* Correct the spelling of successful in two places (from "succesful").
Other changes are a result of reflowing text to stay within the 80 character limit.
This also made me realize that I wasn't using the correct term: these are
'associated functions', not 'static methods'. So I corrected that in the
method syntax chapter.
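For reference, the distinction the terminology draws (the type here is illustrative):

```rust
struct Circle {
    radius: f64,
}

impl Circle {
    // An associated function (what the book previously called a "static
    // method"): no `self` receiver, called as `Circle::new(..)`.
    fn new(radius: f64) -> Circle {
        Circle { radius }
    }

    // A method: takes `self` and is called with dot syntax.
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

fn main() {
    let c = Circle::new(2.0);
    println!("{}", c.area());
}
```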
As pointed out in #17136 the semantics of a `BufStream` aren't always what one
expects, and it looks like other [languages like C#][c-sharp] implement a
buffered stream with only one underlying buffer. For now this commit
destabilizes the primitive in the `std::io` module to give us some more time in
figuring out what to do with it.
[c-sharp]: https://msdn.microsoft.com/en-us/library/system.io.bufferedstream%28v=vs.110%29.aspx
[breaking-change]
Coherence now allows this: we have `SliceConcatExt<T> for [V]` where `T: Sized
+ Clone` and `SliceConcatExt<str> for [S]`. These don't conflict because
`str` is never `Sized`.
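A minimal sketch of why the overlap check accepts this (the trait and impl names here are made up; the logic mirrors the `SliceConcatExt` case):

```rust
trait MyConcat<Sep: ?Sized> {
    fn kind() -> &'static str;
}

// The separator type `T` is implicitly `Sized` here (and `Clone`, mirroring
// the bound mentioned above).
impl<T: Clone, V> MyConcat<T> for [V] {
    fn kind() -> &'static str { "sized separator" }
}

// This impl only applies when the separator type is `str`, which is never
// `Sized`, so it cannot overlap with the impl above.
impl<S> MyConcat<str> for [S] {
    fn kind() -> &'static str { "str separator" }
}

fn main() {
    println!("{}", <[String] as MyConcat<str>>::kind());
    println!("{}", <[u8] as MyConcat<u8>>::kind());
}
```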
This test has deadlocked on Windows once or twice now and we've had lots of
problems in the past of threads panicking when the process is being shut down.
One of the two threads in this test is guaranteed to panic because of the
`.unwrap()` on the `send` calls, so just call `recv` on both receivers after the
test executes to ensure that both threads are dying/dead.
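The pattern looks roughly like this (a simplified stand-in, not the actual test):

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (tx1, rx1) = channel();
    let (tx2, rx2) = channel();

    // Two worker threads, each sending on its own channel; in the real test
    // one of them is expected to panic on `send(..).unwrap()`.
    let t1 = thread::spawn(move || { tx1.send(1).unwrap(); });
    let t2 = thread::spawn(move || { tx2.send(2).unwrap(); });

    // ... the body of the test would run here ...

    // Draining both receivers before returning makes sure neither thread is
    // still alive (possibly mid-panic) while the process is shutting down.
    let _ = rx1.recv();
    let _ = rx2.recv();
    let _ = t1.join();
    let _ = t2.join();
}
```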
This did not render as intended:
>This is defined in RFC 5737 - 192.0.2.0/24 (TEST-NET-1) - 198.51.100.0/24 (TEST-NET-2) - 203.0.113.0/24 (TEST-NET-3)
vs.
> This is defined in RFC 5737
- 192.0.2.0/24 (TEST-NET-1)
- 198.51.100.0/24 (TEST-NET-2)
- 203.0.113.0/24 (TEST-NET-3)