Almost all operations on `Path` are based on the components iterator in one form
or another, which is what makes equivalent paths compare equal. The `Hash`
implementations, however, mistakenly went straight to the underlying `OsStr`,
causing equivalent paths to hash differently and hence not get merged together.
This commit updates the `Hash` implementation to also be based on the iterator,
which should ensure that if two paths are equal they hash to the same value.
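For illustration, the invariant this restores (a minimal sketch; `DefaultHasher` is used purely as an example hasher and postdates this PR):

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::Path;

fn hash_of<T: Hash + ?Sized>(t: &T) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

fn main() {
    // Same path once redundant separators are dropped by `components()`.
    let a = Path::new("foo/bar");
    let b = Path::new("foo//bar/");
    assert_eq!(a, b);                   // equality already goes through components
    assert_eq!(hash_of(a), hash_of(b)); // with this change, `Hash` agrees with `Eq`
}
```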
cc #29008, but doesn't close it
`Rc::try_unwrap` and `Rc::make_mut` have been stable since 1.4.0, but the example code still has `#![feature(rc_unique)]`. Ideally the stable and beta docs would be updated too, but I don't think that's possible...
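For reference, roughly what the fixed example boils down to on stable (a sketch, not the exact doc snippet):

```
use std::rc::Rc;

fn main() {
    // No `#![feature(rc_unique)]` attribute is needed on stable Rust 1.4+.
    let x = Rc::new(3);
    assert_eq!(Rc::try_unwrap(x), Ok(3));

    let mut y = Rc::new(5);
    *Rc::make_mut(&mut y) += 1; // clones the inner value only if `y` is shared
    assert_eq!(*y, 6);
}
```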
I read this section a few times before I even had a guess at what
was meant, then consulted IRC for confirmation. It may be that I
was being thick-headed, but I think this is a useful addition.
Note: for now, this change only affects `-windows-gnu` builds.
So why was this `libgcc` dylib dependency needed in the first place?
The stack unwinder needs to know the locations of the unwind tables of all the modules loaded in the current process. The easiest portable way of achieving this is to have each module register itself with the unwinder when it is loaded into the process. All modules compiled by GCC do this by calling `__register_frame_info()` from their startup code (that's `crtbegin.o` and `crtend.o`, which are automatically linked into any gcc output).
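Schematically, that registration looks something like the sketch below (not the actual `rsbegin`/`rsend` code from this PR; the size of libgcc's opaque `struct object` scratch buffer is a conservative guess):

```
// Sketch: how a startup object hands its unwind tables to the process-wide
// unwinder, following the libgcc convention described above.
extern "C" {
    // Exported by the single unwinder in the process.
    fn __register_frame_info(eh_frame_begin: *const u8, object: *mut usize);
    fn __deregister_frame_info(eh_frame_begin: *const u8) -> *mut usize;

    // Start of this module's .eh_frame section.
    static __EH_FRAME_BEGIN__: u8;
}

// Scratch space for libgcc's opaque `struct object`; the size is an assumption.
static mut OBJ: [usize; 16] = [0; 16];

// Called from the module's startup code (.init_array / .ctors).
unsafe fn register_unwind_info() {
    __register_frame_info(&__EH_FRAME_BEGIN__, OBJ.as_mut_ptr());
}

// Called from the module's shutdown code (.fini_array / .dtors).
unsafe fn deregister_unwind_info() {
    __deregister_frame_info(&__EH_FRAME_BEGIN__);
}
```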
Another important piece is that there should be only one copy of the unwinder (and thus unwind tables registry) in the process. This pretty much means that the unwinder must be in a shared library (unless everything is statically linked).
Now, the Rust compiler tries very hard to make sure that any given Rust crate appears in the final output just once. So if we link the unwinder statically into one of Rust's crates, everything should be fine.
Unfortunately, GCC startup objects are built under the assumption that `libgcc` is the one true place for the unwind info registry, so I couldn't find any better way than to replace them. So out go `crtbegin`/`crtend`, in come `rsbegin`/`rsend`!
A side benefit of this change is that rustc is now more in control of the command line that goes to the linker, so we could stop using `gcc` as the linker driver and just invoke `ld` directly.
Motivation:
- It is not actually a pattern
- It is not actually needed, except for...
Drawback:
- Slice patterns like `[a, _.., b]` are pretty-printed as `[a, .., b]`. Great loss :(
plugin-[breaking-change], as always
Similarly to the SIMD intrinsics. I believe this is a better solution than #29288, and I could implement it for `overflowing_add`/`sub`/`mul` as well. Also rename `udiv`/`sdiv` to `div`, and likewise for `rem`.
Currently, if a print happens while a thread is being torn down, it may cause a
panic if the LOCAL_STDOUT TLS slot has been destroyed by that point. This adds a
guard to check for that case and prints to the process stdout instead (as we
already do when the slot is already borrowed).
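Schematically, the guarded fallback looks something like this (a sketch of the pattern only, not the libstd internals; `LOCAL_STDOUT` here is a local stand-in and `try_with` postdates this change):

```
use std::cell::RefCell;
use std::io::{self, Write};

thread_local! {
    // Stand-in for the real thread-local stdout slot in libstd.
    static LOCAL_STDOUT: RefCell<Option<Box<dyn Write>>> = RefCell::new(None);
}

fn print_guarded(args: std::fmt::Arguments) {
    let done = LOCAL_STDOUT
        .try_with(|slot| {
            // `try_borrow_mut` covers the "slot is already borrowed" case.
            match slot.try_borrow_mut() {
                Ok(mut local) => match local.as_mut() {
                    Some(w) => { let _ = w.write_fmt(args); true }
                    None => false,
                },
                Err(_) => false,
            }
        })
        .unwrap_or(false); // `try_with` errs if the TLS slot was destroyed

    if !done {
        // Fall back to the process-wide stdout instead of panicking.
        let _ = io::stdout().write_fmt(args);
    }
}

fn main() {
    print_guarded(format_args!("hello from a guarded print\n"));
}
```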
Closes #29488
These two commits do a few things:
1. reformat to 80 cols
2. use reference-style links where appropriate for improved in-source readability
3. add a few links, tweak a couple of words, `3` -> `three`, stuff like that
While the diff is big due to these edits, there's no significant content change.
r? @brson
Before this patch, the `reserve` function allocated twice the requested
amount of elements (not twice the current capacity). This led to unnecessarily
excessive memory usage in scenarios like this:
```
let mut v = Vec::new();
v.push(17);                   // len is now 1
v.extend(0..10);              // needs room for 11 elements in total
println!("{}", v.capacity());
```
`Vec` allocated 22 elements, while it could allocate just 11.
The `reserve` function must preserve the property that a `push` (which calls
`reserve`) costs amortized `O(1)`. To achieve this, `reserve` must grow the
capacity exponentially when it reallocates.
There is a better strategy for implementing `reserve`:
```
let new_capacity = max(current_capacity * 2, requested_capacity);
```
This strategy still guarantees exponential capacity growth through `reserve`
(keeping `push` amortized `O(1)`), and it fixes the issue with `extend`.
The patch implements this strategy.
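For concreteness, a minimal sketch of that growth rule and how it plays out in the scenario above (`new_capacity` is a hypothetical helper, not the actual RawVec code):

```
use std::cmp;

// `len` is the current number of elements, `additional` the extra room requested.
fn new_capacity(current_capacity: usize, len: usize, additional: usize) -> usize {
    let requested = len.checked_add(additional).expect("capacity overflow");
    cmp::max(current_capacity * 2, requested)
}

fn main() {
    // The `extend` scenario above: len 1, small capacity, 10 more elements needed.
    assert_eq!(new_capacity(1, 1, 10), 11); // 11, not 22
    // Plain doubling still happens, so `push` stays amortized O(1).
    assert_eq!(new_capacity(8, 8, 1), 16);
}
```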