#[used] attribute
(For an explanation of what this feature does, read the commit message)
I'd like to propose landing this as an experimental feature (experimental as in:
no clear stabilization path -- like `asm!`, `#[linkage]`) as it's low
maintenance (I think) and relevant to the "Usage in resource-constrained
environments" exploration area.
The main use case I see is running code before `main`. This could be used, for
instance, to cheaply initialize an allocator before `main`, where the alternative
is to use `lazy_static` to initialize the allocator on its first use, which is
more expensive (atomics) and doesn't work on ARM Cortex-M0 microcontrollers (no
`AtomicUsize` on that platform).
Here's a `std` example of that:
``` rust
// These functions run before `main` via the platform's init array.
unsafe extern "C" fn before_main_1() {
    println!("Hello");
}

unsafe extern "C" fn before_main_2() {
    println!("World");
}

// `#[used]` keeps this otherwise-unreferenced static from being optimized
// away; the `.init_array` section is where the ELF startup code looks for
// functions to run before `main`.
#[link_section = ".init_array"]
#[used]
static INIT_ARRAY: [unsafe extern "C" fn(); 2] = [before_main_1, before_main_2];

fn main() {
    println!("Goodbye");
}
```
```
$ rustc -C lto -C opt-level=3 before_main.rs
$ ./before_main
Hello
World
Goodbye
```
In general, this pattern could be used to let *dependencies* run code before
`main` (which sounds like it could go very wrong in some cases). There are
probably other use cases; I hope that the people I have cc-ed can comment on
those.
Note that I'm personally unsure whether the above pattern is something we want
to promote / allow, which is why I'm proposing this feature as experimental. If
it leads to more footguns than benefits, then we can just axe the feature.
cc @nikomatsakis ^ I know you have some thoughts on having a process for
experimental features, though I'm fine with writing an RFC before landing this.
- The `dead_code` lint will have to be updated to special-case `#[used]` symbols.
- Should we extend `#[used]` to work on non-generic functions?
cc rust-lang/rfcs#1002
cc rust-lang/rfcs#1459
cc @dpc @JinShil
Avoid type-checking addition and indexing twice.
Fixes #40610 by moving the common `check_expr_coercable_to_type` call before the error reporting logic for binops and removing the one from `check_str_addition`.
Fixes #40861 by removing an unnecessary `check_expr_coercable_to_type` call.
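For illustration (a minimal example of mine, not code from either issue), these are the expression shapes that flow through the two paths touched here:
```rust
// `+` on string-like operands goes through the binop path (and, on a
// type error, `check_str_addition`); `a[i]` goes through the indexing
// path. Each expression should now be type-checked only once.
fn main() {
    let s = "hello".to_string();
    let t = s + " world"; // String + &str addition
    let a = [1, 2, 3];
    let x = a[1]; // indexing
    println!("{} {}", t, x);
}
```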
rustdoc: Use pulldown-cmark for Markdown HTML rendering
Instead of rendering all of the HTML in rustdoc this relies on
pulldown-cmark's `push_html` to do most of the work. A few iterator
adapters are used to make rustdoc specific modifications to the output.
This also fixes `MarkdownHtml` and link titles in `plain_summary_line`.
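As a rough sketch of that pattern (the adapter here is a placeholder, not rustdoc's actual code):
```rust
// Wrap pulldown-cmark's event stream in an iterator adapter, then let
// `push_html` render the final HTML.
use pulldown_cmark::{html, Parser};

fn render(input: &str) -> String {
    let parser = Parser::new(input);
    // Placeholder adapter: rustdoc-specific rewrites (headers, code
    // blocks, footnotes, ...) would map over the events here.
    let events = parser.map(|event| event);
    let mut out = String::new();
    html::push_html(&mut out, events);
    out
}
```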
https://ollie27.github.io/rust_doc_test/ contains the docs built with this change and #41111.
Part of #40912.
cc @GuillaumeGomez
r? @steveklabnik
Fix Markdown issues in the docs
* Since the switch to pulldown-cmark, reference link definitions need a blank
  line before them (see the example below). (#40912)
* Reference link labels are not case-sensitive.
* Doc comments need to be indented uniformly, otherwise rustdoc gets
  confused.
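To illustrate the first point (placeholder URL, not a real doc comment from the tree): under CommonMark a link reference definition cannot interrupt a paragraph, so without a blank line it is swallowed as paragraph text.
```
Broken (the definition is parsed as paragraph text):

    See [the guide][guide].
    [guide]: https://example.com/guide

Fixed (the blank line lets the definition be recognized):

    See [the guide][guide].

    [guide]: https://example.com/guide
```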
don't try to blame tuple fields for immutability
Tuple fields don't have an `&T` in their declaration that can be changed
to `&mut T`, so skip them.
Fixes #41104.
r? @nikomatsakis
Introduce HashStable trait and base ICH implementations on it.
This PR introduces the `HashStable` trait which marks that a type can be hashed in a way that is stable across multiple compilation sessions. The PR also moves HIR incr. comp. hashing over to implementations of this trait instead of doing this via a HIR visitor. It also provides many `HashStable` implementations that are not used yet (e.g. for MIR types) but soon will be used when we directly hash crate metadata for incr. comp.
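As a rough sketch of the idea (the trait shape below is simplified and assumed for illustration; the real implementation hashes through a dedicated stable hasher):
```rust
use std::hash::Hasher;

// Stable hashing is parameterized over a context so that
// session-dependent data (interned IDs, spans, ...) can be translated
// to a stable form before being fed to the hasher.
trait HashStable<CTX> {
    fn hash_stable<H: Hasher>(&self, ctx: &mut CTX, hasher: &mut H);
}

impl<CTX> HashStable<CTX> for u32 {
    fn hash_stable<H: Hasher>(&self, _ctx: &mut CTX, hasher: &mut H) {
        hasher.write_u32(*self);
    }
}

#[derive(Copy, Clone)]
struct Span { lo: u32, hi: u32 }

// Exhaustive destructuring: adding a field to `Span` later makes this
// impl fail to compile until the new field is hashed (or explicitly
// ignored).
impl<CTX> HashStable<CTX> for Span {
    fn hash_stable<H: Hasher>(&self, ctx: &mut CTX, hasher: &mut H) {
        let Span { lo, hi } = *self;
        lo.hash_stable(ctx, hasher);
        hi.hash_stable(ctx, hasher);
    }
}
```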
I've only done superficial performance measurements but it looks like the new implementation is a bit faster than the current one (due, I suppose, to some bugs I fixed and some unnecessary inefficiencies I removed). Here is the time in seconds for the `compute_incremental_hashes_map` pass for various crates:
| | OLD | NEW |
|:---------------:|:-----:|:-----:|
| libcore | 0.507 | 0.409 |
| libsyntax | 0.320 | 0.260 |
| librustc | 0.730 | 0.611 |
| librustc_driver | 0.024 | 0.015 |
Some notes regarding the implementation:
* Most `HashStable` implementations are provided via the `impl_hash_stable_for!` macro (as suggested by @nikomatsakis). This works out quite well. A custom_derive would have been better but Macros 1.1 are not available in the compiler.
* The trait implementations take care to exhaustively destructure everything they hash, so that fields added in the future don't fall through the cracks. This is a bit verbose, but I think it's well worth the trouble since we've had quite a few issues with missing fields or visitor callbacks in this area in the past. Most of it is behind the macro anyway.
cc @rust-lang/compiler
r? @nikomatsakis
std: Use `poll` instead of `select`
This gives us the benefit of supporting file descriptors over the limit that
`select` supports, which...
Closes #40894
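A sketch of why `poll` lifts the limit (my illustration via the `libc` crate, not std's internal code): `select` takes fixed-size `fd_set` bitmaps capped at `FD_SETSIZE` (commonly 1024), while `poll` takes a caller-sized array of `pollfd` entries.
```rust
extern crate libc;

// Wait until `fd` is readable or the timeout expires. Any fd value
// works, including ones >= FD_SETSIZE, which `select` cannot handle.
fn wait_readable(fd: libc::c_int, timeout_ms: libc::c_int) -> std::io::Result<bool> {
    let mut fds = [libc::pollfd { fd, events: libc::POLLIN, revents: 0 }];
    let n = unsafe { libc::poll(fds.as_mut_ptr(), 1, timeout_ms) };
    if n < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(n > 0 && fds[0].revents & libc::POLLIN != 0)
}
```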
Move libXtest into libX/tests
This change moves:
1. `libcoretest` into `libcore/tests`
2. `libcollectionstest` into `libcollections/tests`
This is a follow-up to #39561.
r? @alexcrichton
Handle symlinks in src/bootstrap/clean.rs (mostly) -- resolves #40860.
In response to #40860
The broken condition can be replicated with:
```shell
export MYARCH=x86_64-apple-darwin && mkdir -p build/$MYARCH/subdir &&
touch build/$MYARCH/subdir/file &&
ln -s build/$MYARCH/subdir/file build/$MYARCH/subdir/symlink
```
`src/bootstrap/clean.rs` has a custom tree-removal implementation,
`fn rm_rf`, that used `std::path::Path::{is_file, is_dir, exists}` while
recursively deleting directories and files. Unfortunately, `Path`'s
implementations of `is_file()`, `is_dir()`, and `exists()`
unconditionally follow symlinks, which is the exact opposite of how
file trees are normally deleted.
It appears that this custom implementation is being used to work around
a behavior on Windows where files often get marked read-only, which
prevents us from simply using something nice and simple like
`std::fs::remove_dir_all`, which properly deletes links instead of
following them.
So it looks like the fix is to use `.symlink_metadata()` to figure out
whether tree items are files/symlinks/directories. The one corner case
this won't cover is a broken symlink in the "root" `build/$MYARCH`
directory, because those initial entries are run through
`Path::canonicalize()`, which panics on broken symlinks. So let's just
never use symlinks in that one directory. :-)
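A sketch of that approach (illustrative; the real `rm_rf` also has to cope with the Windows read-only attribute mentioned above):
```rust
use std::fs;
use std::io;
use std::path::Path;

// `symlink_metadata` reports on the entry itself, so a symlink is
// deleted as a link instead of being followed into its target.
fn rm_rf(path: &Path) -> io::Result<()> {
    let file_type = fs::symlink_metadata(path)?.file_type();
    if file_type.is_dir() {
        for entry in fs::read_dir(path)? {
            rm_rf(&entry?.path())?;
        }
        fs::remove_dir(path)
    } else {
        // Plain files and symlinks (even dangling ones) both land here.
        fs::remove_file(path)
    }
}
```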
Overhaul Bootstrap (x.py) Command-Line-Parsing & Help Output
While working on #40417, I got frustrated with the behavior of x.py and the bootstrap binary it wraps, so I decided to do something about it. This PR should improve documentation, make the command-line-parsing more flexible, and clean up some of the internals. No command that worked before should stop working. At least that's the theory. :-)
This should resolve at least #40920 and #38373.
Changes:
- No more manual args manipulation -- getopts is used everywhere except the one place it's not possible. As a result, options can now be in any position, even before the subcommand.
- The additional options for test, bench, and dist now appear in the help output.
- No more single-letter variable bindings used internally for large scopes.
- Don't output the time measurement when just invoking `x.py` or when explicitly passing `-h` or `--help`.
- Logic is now much more linear: we build strings up and then print them.
- Refer to subcommands as subcommands everywhere (some places were saying "command").
- Other minor stuff.
@alexcrichton This is my first PR. Do I need to do something specific to request reviewers or anything?
Emit proper lifetime start intrinsics for personality slots
We currently only emit a single call to the lifetime start intrinsic
for the personality slot alloca. This happens because we create that
call at the time that we create the alloca, instead of creating it each
time we start using it. Because LLVM usually removes the alloca before
the lifetime intrinsics are even considered, this hasn't caused any
problems yet, but we should fix it anyway.
Make 'overlapping_inherent_impls' lint a hard error
This ought to have been implemented in PR #40728. Unfortunately, when I rebased that PR to resolve a merge conflict, the "hard error" code disappeared. This PR complements the initial one.
The following Rust code now gives the following error:
```rust
struct Foo;
impl Foo {
    fn id() {}
}
impl Foo {
    fn id() {}
}
fn main() {}
```
```
error[E0592]: duplicate definitions with name `id`
--> /home/topecongiro/test.rs:4:5
|
4 | fn id() {}
| ^^^^^^^^^^ duplicate definitions for `id`
...
8 | fn id() {}
| ---------- other definition for `id`
error: aborting due to previous error
```
Let .rev()'s find use the underlying rfind and vice versa
- Connect the plumbing in the obvious way: `Rev`'s `find` forwards to the underlying `rfind` and vice versa (sketched below)
- A style change in the provided implementation of `Iterator::rfind`, using a simple `next_back` loop when that is enough
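A sketch of the delegation (a standalone wrapper type here, rather than core's actual `Rev` adapter):
```rust
struct MyRev<I>(I);

impl<I: DoubleEndedIterator> Iterator for MyRev<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        self.0.next_back()
    }

    // Searching forward through the reversed iterator is exactly a
    // backwards search through the underlying one, so forward to
    // `rfind` instead of walking element by element.
    fn find<P>(&mut self, predicate: P) -> Option<I::Item>
    where
        P: FnMut(&I::Item) -> bool,
    {
        self.0.rfind(predicate)
    }
}

fn main() {
    let v = [1, 3, 5, 6, 7];
    // First even number, scanning from the back.
    assert_eq!(MyRev(v.iter()).find(|&&x| x % 2 == 0), Some(&6));
}
```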
mark build::cfg::start_new_block as inline(never)
LLVM has a bug - [PR32488](https://bugs.llvm.org//show_bug.cgi?id=32488) - where it fails to deduplicate allocas in some
circumstances. The function `start_new_block` has allocas totalling 1216
bytes, and when LLVM inlines several copies of that function into
the recursive function `expr::into`, that function's stack space usage
goes into the tens of KiB, causing stack overflows.
Mark `start_new_block` as inline(never) to keep it from being inlined,
getting stack usage under control.
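As a minimal illustration of the technique (the names and the 1216-byte figure are stand-ins, not the actual compiler code):
```rust
// Keeping a large-framed function out of its callers means its stack
// space is live only for the duration of the call, not for the whole
// (possibly recursive) caller frame.
#[inline(never)]
fn big_frame() {
    let buf = [0u8; 1216]; // stand-in for the allocas mentioned above
    std::hint::black_box(&buf); // keep `buf` from being optimized away
}

fn main() {
    big_frame();
}
```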
Fixes #40493.
Fixes #40573.
r? @eddyb
Add ptr::offset_to
This PR adds a method to calculate the signed distance (in number of elements) between two pointers. The resulting value can then be passed to `offset` to get one pointer from the other. This is similar to pointer subtraction in C/C++.
There are two special cases (see the sketch after this list):
- If the distance is not a multiple of the element size, the result is rounded towards zero (in C/C++ this is UB).
- ZSTs return `None`, while normal types return `Some(isize)`. This forces the user to handle the ZST case in unsafe code. (C/C++ doesn't have ZSTs.)
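A sketch of the described semantics, computed by hand (the free function below is illustrative, not the proposed method itself):
```rust
use std::mem;

// Signed distance in elements from `from` to `to`, rounded toward
// zero; `None` for zero-sized types, which have no element distance.
fn offset_to<T>(from: *const T, to: *const T) -> Option<isize> {
    let size = mem::size_of::<T>();
    if size == 0 {
        None
    } else {
        Some((to as isize).wrapping_sub(from as isize) / size as isize)
    }
}

fn main() {
    let a = [1u32, 2, 3, 4];
    let d = offset_to(&a[0], &a[2]).unwrap();
    assert_eq!(d, 2);
    // Passing the distance back to `offset` recovers the other pointer.
    assert_eq!(unsafe { (&a[0] as *const u32).offset(d) }, &a[2] as *const u32);
}
```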
Reduce a table used for `Debug` impl of `str`.
This commit shrinks the aforementioned table from 2,102 bytes to 1,197 bytes. This is achieved by observing that most `u16` entries share their upper byte. Specifically:
- `SINGLETONS` now uses two tables, one of (upper byte, lower count) pairs and another holding a series of lower bytes. For each upper byte, the given number of lower bytes is read and compared.
- `NORMAL` now uses a variable-length format for the counts of "true" codepoints and "false" codepoints: one byte with the MSB unset, or two big-endian bytes with the first MSB set (decoder sketched below).
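A sketch of a decoder for that variable-length count format (illustrative, not the actual library code):
```rust
// One byte with the MSB unset encodes a count directly; otherwise the
// first byte (MSB set) carries the high bits and one more byte
// follows, big-endian.
fn decode_counts(table: &[u8]) -> Vec<u16> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < table.len() {
        let b = table[i] as u16;
        if b & 0x80 == 0 {
            out.push(b);
            i += 1;
        } else {
            out.push(((b & 0x7f) << 8) | table[i + 1] as u16);
            i += 2;
        }
    }
    out
}

fn main() {
    // 0x23 -> 35; (0x81, 0x00) -> 0x0100 = 256.
    assert_eq!(decode_counts(&[0x23, 0x81, 0x00]), vec![35, 256]);
}
```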
The code size and relative performance remain roughly the same, as this commit tries to optimize for both. The new table and algorithm have been verified to be equivalent to the old ones.
In my x86-64 macOS laptop with `rustc 1.17.0-nightly (0aeb9c129 2017-03-15)`, `-C opt-level=3 -C lto` gives the following:
* The old routine compiles to 2,102 bytes of data and 416 bytes of code.
* The new routine compiles to 1,197 bytes of data and 448 bytes of code.
Counting the number of all printable Unicode scalar values (128,003, if you wonder) by filtering `0..0x110000` with `std::char::from_u32` and `is_printable` took 50±7ms with both routines. The parity can be surprising, as the new routine *has* to do more calculations; it is partly explained by the fact that a single linear search of `SINGLETONS` has been replaced by *two* linear searches over upper and lower bytes, which greatly reduces the iteration count.
Simplify HashMap Bucket interface
> * Store capacity_mask instead of capacity
> * Move bucket index into RawBucket
> * Valid bucket index is now always within [0..table_capacity)
> * Simplify iterators by moving logic into RawBuckets
> * Clone RawTable using RawBucket
> * Make retain aware of the number of elements
The idea was to put `idx` in `RawBucket` instead of the other bucket types and to simplify `next()` and `prev()` as much as possible. The rest was a side effect of that change, except maybe the last two items.
This change makes iteration and other `next()`/`prev()`-heavy operations noticeably faster. Clone is way faster.
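A sketch of the shape of that change (illustrative structs, not the actual `std::collections` internals):
```rust
// With the index stored on the bucket itself and the capacity kept as
// a mask (capacity - 1, capacity being a power of two), next()/prev()
// reduce to a masked add/sub on a single field.
struct RawBucket {
    idx: usize, // always kept within [0, capacity)
}

struct RawTable {
    capacity_mask: usize,
}

impl RawBucket {
    fn next(&mut self, table: &RawTable) {
        self.idx = (self.idx + 1) & table.capacity_mask;
    }

    fn prev(&mut self, table: &RawTable) {
        self.idx = self.idx.wrapping_sub(1) & table.capacity_mask;
    }
}
```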
```
➜ hashmap2 git:(adapt) ✗ cargo benchcmp pre:: adp:: bench.txt
name pre:: ns/iter adp:: ns/iter diff ns/iter diff %
clone_10_000 74,364 39,736 -34,628 -46.57%
grow_100_000 8,343,553 8,233,785 -109,768 -1.32%
grow_10_000 817,825 723,958 -93,867 -11.48%
grow_big_value_100_000 18,418,979 17,906,186 -512,793 -2.78%
grow_big_value_10_000 1,219,242 1,103,334 -115,908 -9.51%
insert_1000 74,546 58,343 -16,203 -21.74%
insert_100_000 6,743,770 6,238,017 -505,753 -7.50%
insert_10_000 798,079 719,123 -78,956 -9.89%
insert_1_000_000 275,215,605 266,975,875 -8,239,730 -2.99%
insert_int_bigvalue_10_000 1,517,387 1,419,838 -97,549 -6.43%
insert_str_10_000 316,179 278,896 -37,283 -11.79%
insert_string_10_000 770,927 747,449 -23,478 -3.05%
iter_keys_100_000 386,099 333,104 -52,995 -13.73%
iterate_100_000 387,320 355,707 -31,613 -8.16%
lookup_100_000 206,757 193,063 -13,694 -6.62%
lookup_100_000_unif 219,366 193,180 -26,186 -11.94%
lookup_1_000_000 206,456 205,716 -740 -0.36%
lookup_1_000_000_unif 659,934 629,659 -30,275 -4.59%
lru_sim 20,194,334 18,442,149 -1,752,185 -8.68%
merge_shuffle 1,168,044 1,063,055 -104,989 -8.99%
```
Note 2: I may have messed up porting the diff; let's see what CI says.