There's now one unified way to return things from a macro, instead of
choosing between the `AnyMacro` trait and the `MRItem`/`MRExpr`
variants of the `MacResult` enum. This simplifies the logic handling
the expansions, but the biggest value is that it makes macros in (for
example) type position easier to implement, since there is a single
interface to modify.
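As a rough sketch of the shape of such an interface (illustrative names only, not necessarily the exact trait rustc ended up with): a single result trait with one defaulted accessor per expansion position means that supporting macros in a new position, such as types, only requires adding one method.

```rust
// Placeholder AST nodes for the sketch.
struct Expr;
struct Item;
struct Ty;

// Hypothetical unified result trait: every accessor has a default, so a macro
// result only overrides the positions it can actually produce.
trait MacroResult {
    fn make_expr(&self) -> Option<Expr> { None }
    fn make_items(&self) -> Option<Vec<Item>> { None }
    fn make_ty(&self) -> Option<Ty> { None }
}

// An expression-producing macro result.
struct ExprResult;
impl MacroResult for ExprResult {
    fn make_expr(&self) -> Option<Expr> { Some(Expr) }
}

fn main() {
    // The expander only ever deals with a single trait object type.
    let result: Box<dyn MacroResult> = Box::new(ExprResult);
    assert!(result.make_expr().is_some());
    assert!(result.make_ty().is_none());
}
```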
By my measurements (using `-Z time-passes` on libstd and librustc etc.),
this appears to have little-to-no impact on expansion speed. There are
presumably larger costs than the small number of extra allocations and
virtual calls this adds (notably, all `macro_rules!`-defined macros have
not changed in behaviour, since they had to use the `AnyMacro` trait
anyway).
This bug was introduced in #13384 by accident, and this commit continues the
work of #13384 by finishing support for loading a syntax extension crate without
registering it with the local cstore.
Closes #13495
A mismatched type with more type parameters than the expected one causes
`typeck` to look up out of the bounds of the type parameter vector, which
leads to an ICE.
Closes #13466
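A hypothetical illustration of the shape of code involved (not the exact test case from #13466): the found type has more type parameters than the expected one, so indexing the expected type's parameter vector with the larger count runs past its end. This snippet is meant to be rejected with an ordinary type error, not an ICE.

```rust
struct One<A>(A);
struct Two<A, B>(A, B);

fn main() {
    // Mismatched types whose type parameter lists differ in length.
    let _x: One<i32> = Two(1, 2);
}
```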
This is a series of inter-related commits which depend on #13402 (Prune the paths that do not appear in the index). Please consider this an early review request; I'll rebase this when the parent PR gets merged and a rebase is required.
----
This PR aims at reducing the search index without removing the actual information. In my measurements with both the library and compiler docs, the search index is 52% smaller before gzip and 16% smaller after gzip:
```
1719473 search-index-old.js
1503299 search-index.js (after #13402, 13% gain)
 724955 search-index-new.js (after this PR, 52% gain w.r.t. #13402)

 262711 search-index-old.js.gz
 214205 search-index.js.gz (after #13402, 18.5% gain)
 179396 search-index-new.js.gz (after this PR, 16% gain w.r.t. #13402)
```
Both the uncompressed and compressed sizes of the search index have been accounted for. While the former would be less relevant when #12597 (Web site should be transferring data compressed) is resolved, the uncompressed index will be around for a while anyway and directly affects the UX of docs. Moreover, LZ77 (and gzip) can only remove *some* repeated strings (since its search window is limited in size), so optimizing for the uncompressed size often has a positive effect on the compressed size as well.
Each commit represents one of the following incremental improvements, in order:
1. Parent paths were referred to by their AST `NodeId`s, which tend to be large. We don't need the actual node IDs, so we remap them to smaller sequential numbers. This also means that the list of paths can be a flat array instead of an object (a sketch of this remapping follows below).
2. We remap each item type to small predefined numbers. This is strictly intended to reduce the uncompressed size of the search index.
3. We use arrays instead of objects and reconstruct the original objects in the JavaScript code. Since this removes a lot of boilerplate, it affects both the uncompressed and compressed size.
4. (I've found that a centralized `searchIndex` is easier to handle in JS, so I shot one global variable down.)
5. Finally, the repeated paths in the consecutive items are omitted (replaced by an empty string). This also greatly affects both the uncompressed and compressed size.
There had been several unsuccessful attempts to reduce the search index. In particular, I explicitly avoided complex optimizations like encoding paths in a compressed form, and only applied an optimization when its gain was substantial compared to the required changes. Also, while I've tried to be careful, the lack of proper (non-smoke) tests makes me a bit worried; any advice on testing the search indices would be appreciated.
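A minimal sketch of steps 1 and 3 in Rust (a hypothetical helper, not rustdoc's actual code): parent paths are re-keyed from sparse AST `NodeId`s to dense sequential indices, so the path table can be emitted as a flat array and each item refers to its parent by position.

```rust
use std::collections::HashMap;

// Maps (parent NodeId, parent path) pairs to a flat path table plus a
// NodeId -> index lookup, assigning indices in first-seen order.
fn remap_parents(items: &[(u32, &str)]) -> (Vec<String>, HashMap<u32, usize>) {
    let mut paths = Vec::new();        // flat array of parent paths
    let mut index_of = HashMap::new(); // AST NodeId -> dense index into `paths`
    for &(node_id, path) in items {
        index_of.entry(node_id).or_insert_with(|| {
            paths.push(path.to_string());
            paths.len() - 1
        });
    }
    (paths, index_of)
}

fn main() {
    let items = [(900, "std::vec"), (17, "std::str"), (901, "std::vec::Vec")];
    let (paths, index_of) = remap_parents(&items);
    assert_eq!(paths.len(), 3);
    assert_eq!(index_of[&900], 0); // a large NodeId becomes a small sequential index
}
```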
Since the items roughly follow the lexical order, there are
many consecutive items with the same path value which can be
easily compressed.
For the library and compiler docs, this commit decreases
the index size by 26% and 6% before and after gzip, respectively.
The `buildIndex` JS function recovers them into the original object form.
This greatly reduces the size of the uncompressed search index (27%),
while the effect is less visible after gzip (~5%).
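A small sketch of the path compression in Rust (illustrative only, not rustdoc's code): when consecutive items share a path, later occurrences are written as an empty string, and the reader substitutes the last non-empty path it has seen, which is essentially what `buildIndex` does on the JS side.

```rust
// Writer side: replace a repeated consecutive path with an empty string.
fn encode_paths(paths: &[&str]) -> Vec<String> {
    let mut out = Vec::with_capacity(paths.len());
    let mut last = "";
    for &p in paths {
        if p == last {
            out.push(String::new());
        } else {
            out.push(p.to_string());
            last = p;
        }
    }
    out
}

// Reader side: an empty string means "same path as the previous item".
fn decode_paths(encoded: &[String]) -> Vec<String> {
    let mut out = Vec::with_capacity(encoded.len());
    let mut last = String::new();
    for e in encoded {
        if !e.is_empty() {
            last = e.clone();
        }
        out.push(last.clone());
    }
    out
}

fn main() {
    let paths = ["std::vec", "std::vec", "std::vec", "std::str"];
    let encoded = encode_paths(&paths);
    assert_eq!(encoded, vec!["std::vec", "", "", "std::str"]);
    assert_eq!(decode_paths(&encoded), paths);
}
```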
Closures did not have their bounds printed at all, nor their lifetimes. Trait
bounds were also printed in angle brackets rather than after a colon with a '+'
in between them.
Note that on the current task::spawn [1] documentation page, there is no mention
of a `Send` bound even though it is crucially important!
[1] - http://static.rust-lang.org/doc/master/std/task/fn.task.html
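The same concern survives in today's API; as a point of comparison (modern std::thread rather than the old std::task), the `Send` bound on the spawned closure is exactly the kind of information the rendered docs must not drop:

```rust
use std::thread;

fn main() {
    // std::thread::spawn requires F: FnOnce() -> T + Send + 'static; the Send
    // bound is the crucial part of the signature for readers of the docs.
    let handle = thread::spawn(|| 40 + 2);
    assert_eq!(handle.join().unwrap(), 42);
}
```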
The current error message is misleading: it asks users to add `#[feature(..)]`, which ends up being treated as an outer attribute and then produces no error unless the `attribute_usage` lint is enabled. The code will still fail and the user might not understand why.
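For illustration, the distinction the message needs to convey is the placement of the attribute. Feature names are era-specific (`globs` was a 2014 gate), so this snippet documents the old syntax rather than something to compile today:

```rust
// Crate-level *inner* attribute at the top of the crate root; note the `!`.
// This is the form that actually enables a gated feature.
#![feature(globs)]

// An *outer* `#[feature(..)]` attached to an item, by contrast, is treated as an
// ordinary unused attribute and ignored (unless the attribute_usage lint is
// enabled), so the gated code still fails to compile.
fn main() {}
```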
This fixes two separate issues related to character encoding.
* Add `encode_utf16` to the `Char` trait, analogous to `encode_utf8` (a sketch of the encoding follows this list). `&str` already supports UTF-16 encoding but only with a heap allocation. Also fix `encode_utf8` docs and add tests.
* Correctly decode non-BMP hex escapes in JSON (#13064).
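For reference, a standalone sketch of the arithmetic an `encode_utf16`-style method performs for characters outside the BMP (an illustration of the surrogate-pair encoding, not the std implementation):

```rust
// Encode a single scalar value as one or two UTF-16 code units.
fn to_utf16_units(c: char) -> Vec<u16> {
    let cp = c as u32;
    if cp < 0x1_0000 {
        vec![cp as u16]
    } else {
        // Split the value above U+FFFF into a high and low surrogate.
        let v = cp - 0x1_0000;
        vec![0xD800 | ((v >> 10) as u16), 0xDC00 | ((v & 0x3FF) as u16)]
    }
}

fn main() {
    // U+1D11E MUSICAL SYMBOL G CLEF becomes the surrogate pair D834 DD1E.
    assert_eq!(to_utf16_units('\u{1D11E}'), vec![0xD834, 0xDD1E]);
    // BMP characters are a single unit.
    assert_eq!(to_utf16_units('é'), vec![0x00E9]);
}
```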
Previously, all slices derived from a vector whose values were of size 0 had a
null pointer as the 'data' pointer on the slice. This caused the first pointer
yielded during iteration to always be the null pointer. Due to the null pointer
optimization, this meant that the first return value was None, instead of
Some(&T).
This commit changes slice construction from a Vec instance to use a base pointer
of 1 if the values have zero size. This means that the iterator will never
return null, and the iteration will proceed appropriately.
Closes #13467
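A small sketch, in today's terms, of the invariant the fix establishes (with `NonNull::dangling()` standing in for the literal base pointer of 1): a slice of zero-sized values built from a non-null pointer iterates correctly, because `Option<&T>`'s null-pointer optimization never sees a null reference.

```rust
use std::slice;

fn main() {
    // A well-aligned, non-null, never-dereferenced pointer (address 1 for ()).
    let base = std::ptr::NonNull::<()>::dangling().as_ptr();

    // For zero-sized element types this is a valid slice of length 3.
    let zs: &[()] = unsafe { slice::from_raw_parts(base, 3) };

    let mut iter = zs.iter();
    // The first element is Some(&()), not None, despite the elements having size 0.
    assert!(iter.next().is_some());
    assert_eq!(zs.len(), 3);
}
```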
Previously, upstream C libraries were linked in a nondeterministic fashion
because they were collected through iter_crate_data(), which is a
nondeterministic traversal of a hash map. When upstream rlibs had
interdependencies among their native libraries (such as libfoo depending on
libc), then the ordering would occasionally be wrong, causing linkage to fail.
This uses the topologically sorted list of libraries to collect native
libraries, so if a native library depends on libc it just needs to make sure
that the rust crate depends on liblibc.
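A minimal sketch of the idea (hypothetical types, not rustc's internals): walking crates in their topologically sorted order instead of hash-map order makes the emitted `-l` flags deterministic, so a dependency like libc always lands in a workable position.

```rust
struct CrateData {
    native_libs: Vec<String>,
}

// Collect native library link flags by visiting crates in topological order.
fn native_link_args(topo_sorted_crates: &[CrateData]) -> Vec<String> {
    let mut args = Vec::new();
    for krate in topo_sorted_crates {
        for lib in &krate.native_libs {
            args.push(format!("-l{}", lib));
        }
    }
    args
}

fn main() {
    let crates = [
        CrateData { native_libs: vec!["foo".to_string()] },
        CrateData { native_libs: vec!["c".to_string()] }, // liblibc's native dependency
    ];
    // Same input order, same output order, every build.
    assert_eq!(native_link_args(&crates), vec!["-lfoo", "-lc"]);
}
```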
After removing absolute rpaths, cross compile builds (notably the nightly
builders) broke. The RPATH was pointing at an empty directory because only the
rustc binary is copied over, not all of the target libraries.
This modifies the cross compile logic to fixup the rpath of the stage0
cross-compiled rustc to point to where it came from.
Rust advertises itself as being compatible with linux 2.6.18, but the timerfd
set of syscalls weren't added until linux 2.6.25. There is no real need for a
specialized timer implementation beyond being a "little more accurate", but the
select() implementation will suffice for now.
If it is later deemed that an accurate timerfd implementation is needed, it can
be added then through some method which will allow the standard distribution to
continue to be compatible with 2.6.18.
Closes #13447
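A minimal sketch of the portable fallback this relies on, using the `libc` crate: select(2) with no file descriptors acts purely as a timer, and unlike timerfd_create (added in 2.6.25) it exists on Linux 2.6.18.

```rust
use std::ptr;

// Block the calling thread for roughly `ms` milliseconds via select(2).
fn select_sleep_ms(ms: i64) {
    let mut tv = libc::timeval {
        tv_sec: (ms / 1000) as libc::time_t,
        tv_usec: ((ms % 1000) * 1000) as libc::suseconds_t,
    };
    unsafe {
        // nfds == 0 and null fd sets: select() just waits for the timeout.
        libc::select(0, ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), &mut tv);
    }
}

fn main() {
    select_sleep_ms(10); // sleeps ~10ms without touching timerfd
}
```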
There are currently a number of return values from the std::comm methods, not
all of which are necessarily completely expressive:
* `Sender::try_send(t: T) -> bool`
This method currently doesn't transmit back the data `t` if the send fails
due to the other end having disconnected. Additionally, this shares the name
of the synchronous try_send method, but it differs in semantics in that it
only has one failure case, not two (the buffer can never be full).
* `SyncSender::try_send(t: T) -> TrySendResult<T>`
This method accurately conveys all possible information, but it uses a
custom type specific to the std::comm module with no convenience methods on it.
Additionally, if you want to inspect the result you're forced to import
something from `std::comm`.
* `SyncSender::send_opt(t: T) -> Option<T>`
This method uses Some(T) as an "error value" and None as a "success value",
but almost all other uses of Option<T> have Some/None the other way around.
* `Receiver::try_recv() -> TryRecvResult<T>`
Similarly to the synchronous try_send, this custom return type is lacking in
terms of usability (no convenience methods).
With these drawbacks in mind, I believed it was time to re-work the
return types of these methods. The new API for the comm module is:
Sender::send(t: T) -> ()
Sender::send_opt(t: T) -> Result<(), T>
SyncSender::send(t: T) -> ()
SyncSender::send_opt(t: T) -> Result<(), T>
SyncSender::try_send(t: T) -> Result<(), TrySendError<T>>
Receiver::recv() -> T
Receiver::recv_opt() -> Result<T, ()>
Receiver::try_recv() -> Result<T, TryRecvError>
The notable changes made are:
* Sender::try_send => Sender::send_opt. This renaming brings the semantics in
line with the SyncSender::send_opt method. An asynchronous send only has one
failure case, unlike the synchronous try_send method which has two failure
cases (full/disconnected).
* Sender::send_opt returns the data back to the caller if the send is guaranteed
to fail. This method previously returned `bool`, but then the caller was unable
to retrieve the data if it was guaranteed to fail to send. There is still a
race such that when `Ok(())` is returned the data could still fail to be
received, but that's inherent to an asynchronous channel.
* Result is now the basis of all return values. This not only adds lots of
convenience methods to all return values for free, but it also means that you
can inspect the return values with no extra imports (Ok/Err are in the
prelude). Additionally, it's now self-documenting whether something failed or not
because the return value has "Err" in the name.
Things I'm a little uneasy about:
* The methods send_opt and recv_opt are not returning options, but rather
results. I felt more strongly that Option was the wrong return type than the
_opt suffix was wrong, and I couldn't think of a much better name for these
methods. One possible way to think about them is to read the _opt suffix as
"optionally".
* Result<T, ()> is often better expressed as Option<T>. This is only applicable
to the recv_opt() method, but I thought it would be more consistent for
everything to return Result rather than one method returning an Option.
Despite my two reasons to feel uneasy, I feel much better about the consistency
in return values at this point, and I think the only real open question is if
there's a better suffix for {send,recv}_opt.
Closes #11527
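As a usage sketch of the Result-based shape described above, written against today's std::sync::mpsc (which inherited this design) rather than the old std::comm:

```rust
use std::sync::mpsc::{channel, TryRecvError};

fn main() {
    let (tx, rx) = channel::<i32>();
    tx.send(1).unwrap();

    // Ok/Err come from the prelude, and the error type says what went wrong.
    match rx.try_recv() {
        Ok(v) => println!("got {}", v),
        Err(TryRecvError::Empty) => println!("nothing buffered yet"),
        Err(TryRecvError::Disconnected) => println!("sender hung up"),
    }

    // When a send is guaranteed to fail, the value is handed back in Err.
    drop(rx);
    if let Err(e) = tx.send(2) {
        assert_eq!(e.0, 2); // SendError<T> carries the unsent value
    }
}
```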
Same representation change performed with path::unix.
This also implements BytesContainer for StrBuf & adds an (unsafe) method
for viewing & mutating the raw byte vector of a StrBuf.
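For comparison, the surviving form of that unsafe raw-byte access in today's standard library is `String::as_mut_vec` (String being StrBuf's descendant); this is a present-day usage sketch, not the original StrBuf method:

```rust
fn main() {
    let mut s = String::from("abc");
    unsafe {
        // Unsafe because the caller must keep the bytes valid UTF-8.
        let bytes: &mut Vec<u8> = s.as_mut_vec();
        bytes[0] = b'A';
    }
    assert_eq!(s, "Abc");
}
```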