Allow rustc data structures to compile on Android
The flock structure is defined in asm*/fcntl.h. On Android this file is
generated from the Linux kernel source, so the definitions are the same.
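For reference, the shared layout looks roughly like this (a sketch of the generic kernel definition written as a Rust FFI struct, assuming the `libc` crate for the C type aliases; not the exact rustc_data_structures code):

```rust
// Rough sketch of the flock layout from the generic kernel fcntl.h, which
// Linux and Android both use; the exact type aliases may differ per target.
#[allow(non_camel_case_types)]
#[repr(C)]
pub struct flock {
    pub l_type: libc::c_short,
    pub l_whence: libc::c_short,
    pub l_start: libc::off_t,
    pub l_len: libc::off_t,
    pub l_pid: libc::pid_t,
}
```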
Explicitly mention that `Vec::reserve` is based on len not capacity
I spent a good chunk of time tracking down a buffer overrun bug that
resulted from me mistakenly thinking that `reserve` was based on the
current capacity rather than the current length. It would be helpful if this
were called out explicitly in the docs.
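A minimal sketch of the behaviour in question:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(10);
    v.extend_from_slice(&[1, 2, 3]);

    // `reserve(n)` guarantees room for `len() + n` elements, i.e. it is
    // relative to the current length (3), not the current capacity (10).
    v.reserve(10);
    assert!(v.capacity() >= 13);
}
```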
Remove incorrect packed struct test
This UB was found by running the test under [Miri](https://github.com/solson/miri) which rejects these unsafe unaligned loads. 😄
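For illustration, this is roughly the kind of access involved (a sketch, not the removed test itself):

```rust
#[repr(packed)]
struct Packed {
    a: u8,
    b: u32, // stored without its usual 4-byte alignment
}

fn main() {
    let p = Packed { a: 1, b: 2 };
    // Taking a reference to the misaligned field is undefined behaviour,
    // which is exactly the kind of unaligned load Miri rejects:
    // let r: &u32 = &p.b;
    // An unaligned raw-pointer read is the sound alternative:
    let b = unsafe { std::ptr::read_unaligned(std::ptr::addr_of!(p.b)) };
    assert_eq!(b, 2);
}
```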
Actually fix manifest generation
The previous fix had a bug: `toml::encode` returned a runtime error, so this
version just constructs a literal `toml::Value` instead.
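For illustration, constructing the value literally might look roughly like this (a sketch against the older toml 0.x API where `Value::Table` wraps a `BTreeMap`; the helper and field names are made up):

```rust
use std::collections::BTreeMap;
use toml::Value;

// Hypothetical helper: build one manifest entry as a literal `toml::Value`
// rather than serializing a struct through `toml::encode`.
fn manifest_entry(url: &str, hash: &str) -> Value {
    let mut table = BTreeMap::new();
    table.insert("url".to_string(), Value::String(url.to_string()));
    table.insert("hash".to_string(), Value::String(hash.to_string()));
    Value::Table(table)
}
```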
Don't include directory names in shasums
Right now we just run `shasum` on an absolute path, but the shasum files are
only supposed to contain file names, so use `current_dir` and pass just the
file name so that only the file name is emitted.
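A rough sketch of the approach (the helper name and the `-a 256` flag are illustrative, not the actual tooling code):

```rust
use std::path::Path;
use std::process::Command;

// Hypothetical helper: run `shasum` from the file's parent directory and
// pass only the file name, so the emitted line contains no directory part.
fn shasum_file(path: &Path) -> std::io::Result<std::process::Output> {
    Command::new("shasum")
        .arg("-a")
        .arg("256")
        .arg(path.file_name().expect("path has a file name"))
        .current_dir(path.parent().expect("path has a parent directory"))
        .output()
}
```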
Fix a misleading statement in `Iterator.nth()`
The `Iterator.nth()` documentation says "Note that all preceding elements will be consumed". From that I assumed that the preceding elements would be the *only* ones consumed, but in fact the returned element is consumed as well.
The way I read the documentation, I assumed that `nth(0)` would not discard anything (there are 0 preceding elements, and maybe it just peeks at the start of the iterator somehow), so I added a sentence clarifying that it does. I also rephrased it to avoid the stunted "i.e." phrasing.
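For example, with the clarified behaviour:

```rust
fn main() {
    let mut iter = [1, 2, 3].iter();

    // `nth(0)` does not just peek: it consumes and returns the first element,
    assert_eq!(iter.nth(0), Some(&1));
    // so the next call yields the element after it.
    assert_eq!(iter.next(), Some(&2));
}
```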
Specialize `PartialOrd<A> for [A] where A: Ord`
This way we can call `cmp` instead of `partial_cmp` in the loop, removing some burden of optimizing `Option`s away from the compiler.
PR #39538 introduced a regression where sorting slices suddenly became slower, since `slice1.lt(slice2)` was much slower than `slice1.cmp(slice2) == Less`. This problem is now fixed.
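Roughly, the specialized comparison loop amounts to this (a sketch of the idea, not the actual libcore implementation):

```rust
use std::cmp::Ordering;

// Lexicographic comparison that calls `cmp` directly, so the hot loop never
// has to build and match `Option<Ordering>` values.
fn slice_cmp<A: Ord>(a: &[A], b: &[A]) -> Ordering {
    let len = a.len().min(b.len());
    for i in 0..len {
        match a[i].cmp(&b[i]) {
            Ordering::Equal => {}
            non_eq => return non_eq,
        }
    }
    a.len().cmp(&b.len())
}
```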
To verify, I benchmarked this simple program:
```rust
fn main() {
    let mut v = (0..2_000_000)
        .map(|x| x * x * x * 18913515181)
        .map(|x| vec![x, x ^ 3137831591])
        .collect::<Vec<_>>();
    v.sort();
}
```
Before this PR, it would take 0.95 sec, and now it takes 0.58 sec.
I also tried changing the `is_less` lambda to use `cmp` and `partial_cmp`. Now all three versions (`lt`, `cmp`, `partial_cmp`) are equally performant for sorting slices - all of them take 0.58 sec on the
benchmark.
Tangentially, as soon as we get `default impl`, it might be a good idea to implement a blanket default impl for `lt`, `gt`, `le`, `ge` in terms of `cmp` whenever possible. Today, those four functions by default are only implemented in terms of `partial_cmp`.
r? @alexcrichton
Add Emscripten-specific linker
Emscripten claims to accept most GNU linker options, but in fact most `-Wl,...` options have no effect on it; instead it requires some additional special options which are easier to handle in a separate trait.
Currently added:
- `export_symbols`: works on executables as a special Emscripten case, since staticlibs/dylibs aren't compiled to JS, while exports need to be accessible from JS.
Fixes #39171.
- `optimize` - translates Rust's optimization level to the Emscripten optimization level, whether passed via `-C opt-level=...` or `-O...` (a rough sketch of this mapping follows the list).
Fixes #36899.
- `debuginfo` - translates debug info; Emscripten has 5 debug levels while Rust has 3, so I chose to translate `-C debuginfo=1` to `-g3` (preserves whitespace, variable and function names for easy debugging).
Fixes #36901.
- `no_default_libraries` - tells Emscripten to exclude `memcpy` and co.
TODO (in future PR): dynamic linking via `SIDE_MODULE` / `MAIN_MODULE` mechanism.
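As a rough illustration of the `optimize` point above (an assumed mapping, not the actual rustc linker code):

```rust
// Hypothetical sketch: map a Rust `-C opt-level` value to an emcc `-O` flag.
fn emcc_opt_flag(opt_level: &str) -> &'static str {
    match opt_level {
        "0" => "-O0",
        "1" => "-O1",
        "2" => "-O2",
        "3" => "-O3",
        "s" => "-Os",
        "z" => "-Oz",
        _ => "-O2",
    }
}
```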
Conversions between slices and boxes
This allows converting `Copy` slices, `str`, and `CStr` into their boxed counterparts.
This also adds the method `CString::into_boxed_c_str`.
I would like to add similar implementations for `OsStr` as well, but I have not figured out how.
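For example, the new conversions read roughly like this (illustrative usage of the added impls and method):

```rust
use std::ffi::{CStr, CString};

fn main() {
    // `Copy` slices and `str` into their boxed counterparts:
    let boxed_slice: Box<[u32]> = Box::from(&[1, 2, 3][..]);
    let boxed_str: Box<str> = Box::from("hello");

    // `CString` into `Box<CStr>` via the new method:
    let c_string = CString::new("hello").unwrap();
    let boxed_c_str: Box<CStr> = c_string.into_boxed_c_str();

    assert_eq!(&*boxed_slice, &[1u32, 2, 3][..]);
    assert_eq!(&*boxed_str, "hello");
    assert_eq!(boxed_c_str.to_bytes(), &b"hello"[..]);
}
```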
Previously it used to build the switch in a way that didn't preserve the invariant of
SwitchInt. Now it also builds it in an optimal way, where the otherwise branch subsumes
all the branches which did not have partial variant drops.
This removes another special case of Switch by replacing it with the more general SwitchInt. While
this is more clunky currently, there’s no reason we can’t make it nice (and efficient) to use.
Previously AdtDef variants contained a ConstInt for each discriminant, which did not really
reflect the actual type of the discriminants. Moving the type into AdtDef makes it easy to
put the type into metadata and also saves the bytes of ConstVal overhead for each discriminant.
Also arguably the code is cleaner now :)
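As a plain illustration of why the discriminant type is a property of the whole enum rather than of each variant (just the language rule, nothing rustc-internal):

```rust
// All discriminants of an enum share a single type, chosen by `repr`;
// storing that type once on the ADT mirrors the language rule.
#[allow(dead_code)]
#[repr(u8)]
enum Small {
    A = 0,
    B = 255,
}
```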