Commit Graph

59110 Commits

Author SHA1 Message Date
Niko Matsakis
0adb1b1b02 pacify the merciless tidy 2016-12-02 14:18:09 -05:00
Niko Matsakis
6fe4bffb40 test for #37290 using lint 2016-12-01 13:59:07 -05:00
Niko Matsakis
fa22fc387a in region, treat current (and future) item-likes alike
The `visit_fn` code mutates its surrounding context.  Between *items*,
this was saved/restored, but between impl items it was not. This meant
that we wound up with `CallSiteScope` entries with two parents (or
more!).  As far as I can tell, this is harmless in actual type-checking,
since the regions you interact with are always from at most one of those
branches. But it can slow things down.

Before, the effect was limited, since it only applied to impl items
within an impl. After #37660, impl items are visited all together at
the end, and hence this could create a very messed-up
hierarchy. Isolating impl items properly solves both issues.

I cannot come up with a way to unit-test this; for posterity, however,
you can observe the messed-up hierarchies with a test as simple as the
following, which would create a `CallSiteScope` with two parents both
before and after #37660:

```
struct Foo {
}

impl Foo {
    fn bar(&self) -> usize {
        22
    }

    fn baz(&self) -> usize {
        22
    }
}

fn main() { }
```

Fixes #37864.
2016-12-01 13:59:04 -05:00
bors
908dba0c94 Auto merge of #38048 - rkruppe:llvm-stringref-fixes, r=alexcrichton
[LLVM 4.0] Don't assume llvm::StringRef is null terminated

StringRefs have a length and their contents are not usually null-terminated. The solution is to either copy the string data (in `rustc_llvm::diagnostic`) or take the size into account (in LLVMRustPrintPasses).

I couldn't trigger a bug caused by this (apparently all the strings returned in practice are actually null-terminated) but this is more correct and more future-proof.
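
For illustration, a minimal sketch of the length-respecting approach, assuming a raw pointer/length pair as LLVM's C API exposes for a `StringRef`; `string_from_stringref` is an illustrative name, not the actual `rustc_llvm` binding:

```rust
use std::slice;

// Build an owned String from an explicit (pointer, length) pair instead of
// scanning for a NUL terminator that StringRef does not guarantee.
unsafe fn string_from_stringref(data: *const u8, len: usize) -> String {
    let bytes = slice::from_raw_parts(data, len);
    String::from_utf8_lossy(bytes).into_owned()
}
```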

cc #37609
2016-12-01 15:21:11 +00:00
bors
149e76f12c Auto merge of #38018 - sourcefrog:doc, r=alexcrichton
Document that Process::command will search the PATH
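
A short sketch of the behavior being documented: a bare program name passed to `std::process::Command` is resolved against the directories listed in the `PATH` environment variable (exact lookup rules are platform-specific):

```rust
use std::process::Command;

fn main() {
    // "ls" has no path component, so it is located via a PATH search.
    let status = Command::new("ls").status().expect("failed to run ls");
    println!("exited with: {}", status);
}
```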
2016-12-01 11:35:19 +00:00
bors
827eba4e70 Auto merge of #37911 - liigo:rustdoc-playground, r=alexcrichton
rustdoc: get back missing crate-name when --playground-url is used

follow up PR #37763
r? @alexcrichton (since you r+'d #37763)

----

Edit: When `#![doc(html_playground_url="")]` is used, the current crate name is saved to `PLAYGROUND`, so rustdoc can automatically insert `extern crate NAME;` into code snippets. But when `--playground-url` was introduced in PR #37763, I forgot to save the crate name to `PLAYGROUND`. This PR fixes that.
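
For context, a hedged sketch of the crate-level attribute whose behavior the `--playground-url` flag mirrors; the crate name `my_crate` is hypothetical:

```rust
// Crate root: with a playground URL configured, rustdoc adds a "Run" button
// to documentation code blocks, and it injects `extern crate my_crate;`
// into snippets that omit an explicit extern crate line.
#![doc(html_playground_url = "https://play.rust-lang.org/")]
```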

----

Update:
- add test
- unstable `--playground-url`
2016-12-01 07:07:32 +00:00
bors
070fad1701 Auto merge of #37573 - ruuda:faster-cursor, r=alexcrichton
Add small-copy optimization for copy_from_slice

## Summary

During benchmarking, I found that one of my programs spent between 5 and 10 percent of the time doing memmoves. Ultimately I tracked these down to single-byte slices being copied with a memcopy. Doing a manual copy if the slice contains only one element can speed things up significantly. For my program, this reduced the running time by 20%.

## Background

I am optimizing a program that relies heavily on reading a single byte at a time. To avoid IO overhead, I read all data into a vector once, and then I use a `Cursor` around that vector to read from. During profiling, I noticed that `__memmove_avx_unaligned_erms` was hot, taking up 7.3% of the running time. It turns out that these were caused by calls to `Cursor::read()`, which calls `<&[u8] as Read>::read()`, which calls `&[T]::copy_from_slice()`, which calls `ptr::copy_nonoverlapping()`. This one is implemented as a memcopy. Copying a single byte with a memcopy is very wasteful, because (at least on my platform) it involves calling `memcpy` in libc. This is an indirect call when libc is linked dynamically, and furthermore `memcpy` is optimized for copying large amounts of data at the cost of a bit of overhead for small copies.

## Benchmarks

Before I made this change, `perf` reported the following for my program. I only included the relevant functions, and how they rank. (This is on a different machine than where I ran the original benchmarks. It has an older CPU, so `__memmove_sse2_unaligned_erms` is called instead of `__memmove_avx_unaligned_erms`.)

```
#3   5.47%  bench_decode  libc-2.24.so      [.] __memmove_sse2_unaligned_erms
#5   1.67%  bench_decode  libc-2.24.so      [.] memcpy@GLIBC_2.2.5
#6   1.51%  bench_decode  bench_decode      [.] memcpy@plt
```

`memcpy` is eating up 8.65% of the total running time, and the overhead of dispatching to a specialized fast copy function (`memcpy@GLIBC` showing up) is clearly visible. The price of dynamic linking (`memcpy@plt` showing up) is visible too.

After this change, this is what `perf` reports:

```
#5   0.33%  bench_decode  libc-2.24.so      [.] __memmove_sse2_unaligned_erms
#14  0.01%  bench_decode  libc-2.24.so      [.] memcpy@GLIBC_2.2.5
```

Now only 0.34% of the running time is spent on memcopies. The dynamic linking overhead is not significant at all any more.

To add some more data, my program generates timing results for the operation in its main loop. These are the timings before and after the change:

| Time before   | Time after    | After/Before |
|---------------|---------------|--------------|
| 29.8 ± 0.8 ns | 23.6 ± 0.5 ns |  0.79 ± 0.03 |

The time is basically the total running time divided by a constant; the actual numbers are not important. This change reduced the total running time by 21% (much more than the original 9% spent on memmoves, likely because the CPU stalls far less when data dependencies are more transparent). Of course YMMV and for most programs this will not matter at all. But when it does, the gains can be significant!

## Alternatives

* At first I implemented this in `io::Cursor`. I moved it to `&[T]::copy_from_slice()` instead, but this might be too intrusive, especially because it applies to all `T`, not just `u8`. To restrict this to `io::Read`, `<&[u8] as Read>::read()` is probably the best place.
* I tried copying bytes in a loop up to 64 or 8 bytes before calling `Read::read`, but both resulted in about a 20% slowdown instead of speedup.
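
A minimal sketch of the single-byte fast path the PR describes, written as a standalone helper rather than the actual `std::io` code:

```rust
use std::cmp;

// Read from a byte slice into `buf`, advancing the source slice. When only
// one byte moves, copy it by hand to skip the memcpy dispatch entirely.
fn read_from_slice(src: &mut &[u8], buf: &mut [u8]) -> usize {
    let amt = cmp::min(buf.len(), src.len());
    let (head, tail) = src.split_at(amt);
    if amt == 1 {
        buf[0] = head[0]; // single-byte fast path
    } else {
        buf[..amt].copy_from_slice(head);
    }
    *src = tail;
    amt
}

fn main() {
    let mut data: &[u8] = b"abc";
    let mut byte = [0u8; 1];
    let n = read_from_slice(&mut data, &mut byte);
    assert_eq!(n, 1);
    assert_eq!(byte[0], b'a');
    assert_eq!(data, &b"bc"[..]);
}
```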
2016-12-01 02:52:09 +00:00
Martin Pool
db93677360 Document that Process::command will search the PATH 2016-11-30 17:10:32 -08:00
bors
dc81742b18 Auto merge of #38047 - canndrew:fmt-void-non-empty, r=bluss
Make core::fmt::Void a non-empty type.

Adds back the change that was removed from PR #36449, both because it is a fix in its own right and because I immediately hit the problem again when I started implementing my fix for #12609.
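
For illustration, a sketch of the shape of such a change (my reconstruction, not the actual diff): an empty enum is uninhabited, so the compiler may treat code holding a reference to one as unreachable, while a struct with a private field is inhabited yet still unconstructible outside its module:

```rust
// Uninhabited: no value of this type can ever exist, so improved
// unreachability reasoning may optimize away code that handles one.
enum EmptyVoid {}

// Non-empty replacement: values could exist (e.g. behind a transmuted
// reference), but the private field keeps outside code from making one.
struct Void {
    _priv: (),
}
```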
2016-11-30 23:40:10 +00:00
bors
ecff71a45c Auto merge of #37800 - alexcrichton:new-bootstrap, r=eddyb
Update the bootstrap compiler

Now that we've got a beta build, let's use it!
2016-11-30 19:17:24 +00:00
Alex Crichton
2186660b51 Update the bootstrap compiler
Now that we've got a beta build, let's use it!
2016-11-30 10:38:08 -08:00
bors
5a0248068c Auto merge of #38014 - jseyfried:refactor_path_resolution, r=nrc
resolve: refactor path resolution

This is a pure refactoring, modulo minor diagnostics improvements.
r? @nrc
2016-11-30 16:02:18 +00:00
bors
5db4826410 Auto merge of #37989 - nrc:save-mod, r=nikomatsakis
save-analysis: redirect a module decl to the start of the defining file
2016-11-30 12:50:09 +00:00
Ruud van Asseldonk
3be2c3b309 Move small-copy optimization into <&[u8] as Read>
Based on the discussion in https://github.com/rust-lang/rust/pull/37573,
it is likely better to keep this limited to std::io, instead of
modifying a function which users expect to be a memcpy.
2016-11-30 11:09:29 +01:00
Ruud van Asseldonk
341805288e Move small-copy optimization into copy_from_slice
Ultimately copy_from_slice is the bottleneck, not io::Cursor::read.
It might be worthwhile to move the check here, so more places can
benefit from it.
2016-11-30 11:09:29 +01:00
Ruud van Asseldonk
cd7fade0a9 Add small-copy optimization for io::Cursor
During benchmarking, I found that one of my programs spent between 5 and
10 percent of the time doing memmoves. Ultimately I tracked these down
to single-byte slices being copied with a memcopy in io::Cursor::read().
Doing a manual copy if only one byte is requested can speed things up
significantly. For my program, this reduced the running time by 20%.

Why special-case only a single byte, and not a "small" slice in general?
I tried doing this for slices of at most 64 bytes and of at most 8
bytes. In both cases my test program was significantly slower.
2016-11-30 11:09:29 +01:00
bors
3abaf43f77 Auto merge of #37954 - eddyb:rustdoc-2, r=alexcrichton
rustdoc: link to cross-crate sources directly.

Fixes #37684 by implementing proper support for getting the `Span` of definitions across crates.
In rustdoc this is used to generate direct links to the original source instead of fragile redirects.

This functionality could be expanded further for making error reporting code more uniform and seamless across crates, although at the moment there is no actual source to print, only file/line/column information.

Closes #37870, which also "fixes" #37684 by throwing away the builtin macro docs from libcore.
After this lands, #37727 could be reverted, although it doesn't matter much either way.
2016-11-30 07:46:00 +00:00
Eduard-Mihai Burtescu
900191891f rustdoc: link to cross-crate sources directly. 2016-11-30 04:48:56 +02:00
Eduard-Mihai Burtescu
177913b49c rustc: track the Span's of definitions across crates. 2016-11-30 04:48:56 +02:00
bors
8e373b4787 Auto merge of #37965 - Mark-Simulacrum:trait-obj-to-exis-predicate, r=eddyb
Refactor TraitObject to Slice<ExistentialPredicate>

For reference, the primary type changes in this PR are shown below. They may aid in understanding what is discussed below, though they should not be required.

We change `TraitObject` into a list of `ExistentialPredicate`s to allow for a couple of things:
 - Principal (ExistentialPredicate::Trait) is now optional.
 - Region bounds are moved out of `TraitObject` into `TyDynamic`. This permits wrapping only the `ExistentialPredicate` list in `Binder`.
 - `BuiltinBounds` and `BuiltinBound` are removed entirely from the codebase, to permit future non-constrained auto traits. These are replaced with `ExistentialPredicate::AutoTrait`, which only requires a `DefId`. For the time being, only `Send` and `Sync` are supported; this constraint can be lifted in a future pull request.
 - Binder-related logic is extracted from `ExistentialPredicate` into the parent (`Binder<Slice<EP>>`), so the `PolyX`s inside `TraitObject` are replaced with `X`.

The code requires a sorting order for the `ExistentialPredicate`s in the interned `Slice`. The sort order is asserted to be correct during interning rather than established there, so the predicates must already be sorted when passed in:

1. `ExistentialPredicate::Trait`: all principals are defined as equal; **this may be wrong; should we be comparing and sorting them in some way?**
2. `ExistentialPredicate::Projection`: compared by `ExistentialProjection::sort_key`.
3. `ExistentialPredicate::AutoTrait`: compared by `TraitDef.def_path_hash`.

Construction of `ExistentialPredicate`s is conducted through `TyCtxt::mk_existential_predicates`, which interns a passed iterator as a `Slice`. There are no convenience functions for constructing from separate iterators; callers must pass a single iterator chain. This is primarily due to the small number of callers and the difficulty of defining a nice API around the optional parts, where it is hard to tell which argument goes where. Admittedly, the current situation isn't significantly better than a four-argument constructor function, but the extra work seems unnecessary at this time.

```rust
// before this PR
struct TraitObject<'tcx> {
    pub principal: PolyExistentialTraitRef<'tcx>,
    pub region_bound: &'tcx ty::Region,
    pub builtin_bounds: BuiltinBounds,
    pub projection_bounds: Vec<PolyExistentialProjection<'tcx>>,
}

// after
pub enum ExistentialPredicate<'tcx> {
    // e.g. Iterator
    Trait(ExistentialTraitRef<'tcx>),
    // e.g. Iterator::Item = T
    Projection(ExistentialProjection<'tcx>),
    // e.g. Send
    AutoTrait(DefId),
}
```
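
As a self-contained toy model of the sort order described above (stand-in types with placeholder keys, not rustc's actual definitions):

```rust
#[derive(Debug)]
enum Pred {
    Trait,            // principal: sorts first; all principals compare equal
    Projection(u64),  // stand-in for ExistentialProjection::sort_key
    AutoTrait(u64),   // stand-in for TraitDef.def_path_hash
}

impl Pred {
    // Rank tuples reproduce the ordering the PR text describes.
    fn rank(&self) -> (u8, u64) {
        match *self {
            Pred::Trait => (0, 0),
            Pred::Projection(k) => (1, k),
            Pred::AutoTrait(h) => (2, h),
        }
    }
}

fn main() {
    let mut preds = vec![Pred::AutoTrait(7), Pred::Projection(3), Pred::Trait];
    preds.sort_by_key(Pred::rank);
    println!("{:?}", preds); // [Trait, Projection(3), AutoTrait(7)]
}
```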
2016-11-29 20:41:38 -06:00
Liigo Zhuang
d5785a368e rustdoc: fix up --playground-url 2016-11-30 10:33:23 +08:00
Liigo Zhuang
943bf96300 unstable --playground-url, add test code 2016-11-30 10:33:22 +08:00
Liigo Zhuang
c1a6f17031 rustdoc: get back missing crate-name when --playground-url is used
follow up PR #37763
2016-11-30 10:33:22 +08:00
bors
fa0005f2d5 Auto merge of #37863 - mikhail-m1:mut_error, r=nikomatsakis
add hint to fix error for immutable ref in arg

fix #36412; part of #35233
r? @jonathandturner
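
For reference, a sketch (my reconstruction from the PR title, not the PR's own test) of the situation the new hint targets:

```rust
// With `x: &i32`, writing `*x += 1` is rejected, and the improved error
// hints at changing the parameter to `&mut i32`, as done here.
fn increment(x: &mut i32) {
    *x += 1;
}

fn main() {
    let mut n = 0;
    increment(&mut n);
    println!("{}", n); // 1
}
```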
2016-11-29 17:27:00 -06:00
bors
b30022a1d3 Auto merge of #37369 - estebank:multiline-span, r=nikomatsakis
Show multiline spans in full if short enough

When dealing with multiline spans that cover only a few lines, show the complete span instead of restricting output to the first character of the first line.

For example, instead of:

```
% ./rustc file2.rs
error[E0277]: the trait bound `{integer}: std::ops::Add<()>` is not satisfied
  --> file2.rs:13:9
   |
13 |    foo(1 + bar(x,
   |        ^ trait `{integer}: std::ops::Add<()>` not satisfied
   |
```

show

```
% ./rustc file2.rs
error[E0277]: the trait bound `{integer}: std::ops::Add<()>` is not satisfied
  --> file2.rs:13:9
   |
13 |      foo(1 + bar(x,
   |  ________^ starting here...
14 | |            y),
   | |_____________^ ...ending here: trait `{integer}: std::ops::Add<()>` not satisfied
   |
```

The [proposal in internals](https://internals.rust-lang.org/t/proposal-for-multiline-span-comments/4242/6) outlines the reasoning behind this.
2016-11-29 12:53:47 -06:00
bors
f50dbd580f Auto merge of #37918 - flodiebold:separate-bodies, r=nikomatsakis
Separate function bodies from their signatures in HIR

Also give them their own dep map node.

I'm still unhappy with the handling of inlined items (1452edc1), but maybe you have a suggestion for how to improve it.

Fixes #35078.

r? @nikomatsakis
2016-11-29 08:50:38 -06:00
Florian Diebold
593b273659 librustdoc: Fix compilation after visitor change 2016-11-29 13:18:02 +01:00
Niko Matsakis
9457497bcc update comments 2016-11-29 13:04:27 +01:00
Niko Matsakis
104125d5f7 revamp Visitor with a single method for controlling nested visits 2016-11-29 13:04:27 +01:00
Florian Diebold
8575184b39 Fix rebase breakage 2016-11-29 13:04:27 +01:00
Florian Diebold
bf298aebfd Fix doc test collection 2016-11-29 13:04:27 +01:00
Florian Diebold
b10bbde335 Fix SVH tests some more 2016-11-29 13:04:27 +01:00
Florian Diebold
f0ce5bb66b Split nested_visit_mode function off from nested_visit_map
... and make the latter mandatory to implement.
2016-11-29 13:04:27 +01:00
Florian Diebold
725cffb1d5 Address remaining review comments 2016-11-29 13:04:27 +01:00
Florian Diebold
d5a501d312 Fix remaining SVH tests 2016-11-29 13:04:27 +01:00
Florian Diebold
d0ae2c8142 Refactor inlined items some more
They don't implement FnLikeNode anymore; instead they are handled
differently further up in the call tree. Also, keep less information
(just def ids for the args).
2016-11-29 13:04:27 +01:00
Niko Matsakis
dd1491cfbe WIP: update tests to pass -- not complete 2016-11-29 13:04:27 +01:00
Niko Matsakis
688946d671 restructure CollectItem dep-node to separate fn sigs from bodies
Set up two tasks, one of which only processes the signatures, in order to
isolate the typeck entries for signatures from those for bodies.

Fixes #36078
Fixes #37720
2016-11-29 13:04:27 +01:00
Florian Diebold
f75c8a98dd Add make tidy fixes 2016-11-29 13:04:27 +01:00
Florian Diebold
23a8c7d4d9 Remove unused import 2016-11-29 13:04:27 +01:00
Florian Diebold
8d5ca62dcd Fix some comments 2016-11-29 13:04:27 +01:00
Florian Diebold
78b54c07e5 Make hello_world test work again
This used to work with the rustc_clean attribute, but it stopped working
after my rebase, and I don't know enough about the type checking to find
out what's wrong. The dep graph looks like this:

ItemSignature(xxxx) -> CollectItem(xxxx)
CollectItem(xxxx) -> ItemSignature(xxxx)
ItemSignature(xxxx) -> TypeckItemBody(yyyy)
HirBody(xxxx) -> CollectItem(xxxx)

The cycle between CollectItem and ItemSignature looks wrong, and my
guess is the CollectItem -> ItemSignature edge shouldn't be there, but
I'm not sure how to prevent it.
2016-11-29 13:04:27 +01:00
Florian Diebold
7b021298d9 Fix new tests 2016-11-29 13:04:27 +01:00
Florian Diebold
fb968d225a rustc_typeck: Make CollectItemTypesVisitor descend into bodies as well 2016-11-29 13:04:27 +01:00
Florian Diebold
c91037b964 Fix cross-crate associated constant evaluation 2016-11-29 13:04:27 +01:00
Florian Diebold
936dbbce37 Give function bodies their own dep graph node 2016-11-29 13:04:27 +01:00
Florian Diebold
16eedd2a78 Save bodies of functions for inlining into other crates
This is quite hacky and I hope to refactor it a bit, but at least it
seems to work.
2016-11-29 13:04:27 +01:00
Florian Diebold
1ac338c2a7 rustc_driver: fix compilation 2016-11-29 13:04:27 +01:00
Florian Diebold
0389cc6bcd rustc_passes: fix compilation 2016-11-29 13:04:27 +01:00
Florian Diebold
37e75411dd rustc_plugin: fix compilation 2016-11-29 13:04:27 +01:00