Commit Graph

58514 Commits

Author SHA1 Message Date
Ariel Ben-Yehuda
76fb7d90ec remove StaticInliner and NaN checking
NaN checking was a lint for a deprecated feature. It can go away.
2016-10-26 22:41:17 +03:00
Ariel Ben-Yehuda
37418b850f stop using MatchCheckCtxt to hold the param-env for check_match 2016-10-26 22:41:17 +03:00
Ariel Ben-Yehuda
e313d8b290 change match checking to use HAIR
no intended functional changes
2016-10-26 22:41:17 +03:00
Ariel Ben-Yehuda
04a92a1f56 un-break the construct_witness logic
Fixes #35609.
2016-10-26 22:41:17 +03:00
Ariel Ben-Yehuda
abae5e7e25 split the exhaustiveness-checking logic to its own module
`check_match` is now left with its grab bag of random checks.
2016-10-26 22:41:17 +03:00
Ariel Ben-Yehuda
48387c8bd9 refactor the pat_is_catchall logic 2016-10-26 22:41:17 +03:00
Ariel Ben-Yehuda
732f22745d move hair::cx::pattern to const_eval 2016-10-26 22:41:17 +03:00
Ariel Ben-Yehuda
bb5afb4121 use a struct abstraction in check_match 2016-10-26 22:41:16 +03:00
Ariel Ben-Yehuda
b69cca6da4 remove SliceWithSubslice, only used from old trans 2016-10-26 22:41:16 +03:00
Ariel Ben-Yehuda
e5c01f4633 comment some ugly points in check_match 2016-10-26 22:41:16 +03:00
bors
3a25b65c1f Auto merge of #37315 - bluss:fold-more, r=alexcrichton
Implement Iterator::fold for .chain(), .cloned(), .map() and the VecDeque iterators.

Chain can do something interesting here: it can pass the fold on
to its inner iterators.

This lets the underlying iterator's custom fold() be used, and skips the
regular chain logic in next.

Also implement .fold() specifically for .map() and .cloned() so that any
inner fold improvements are available through map and cloned.

In the same way, a VecDeque iterator fold can be turned into two slice folds.

These changes lend the power of the slice iterator's loop codegen to
VecDeque, and to chains of slice iterators, and so on.
It's an improvement for .sum() and .product(), and other uses of fold.
2016-10-26 11:43:32 -07:00
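The delegation described above can be sketched in isolation. The `TwoHalves` type below is a hypothetical stand-in for `core::iter::Chain`, not the actual library code; it only illustrates how overriding `fold` lets each inner iterator run its own (possibly specialized) fold loop.

```rust
// Minimal sketch (not the real core::iter implementation): a chain-like
// adapter whose fold hands the whole loop to each inner iterator in turn,
// so any specialized fold (e.g. the slice iterator's) is reused.
struct TwoHalves<A, B> {
    front: A,
    back: B,
}

impl<A, B, T> Iterator for TwoHalves<A, B>
where
    A: Iterator<Item = T>,
    B: Iterator<Item = T>,
{
    type Item = T;

    fn next(&mut self) -> Option<T> {
        // The element-at-a-time path still branches between the two halves.
        self.front.next().or_else(|| self.back.next())
    }

    fn fold<Acc, F>(self, init: Acc, mut f: F) -> Acc
    where
        F: FnMut(Acc, T) -> Acc,
    {
        // fold avoids that branching entirely: drain the front, then the back.
        let acc = self.front.fold(init, &mut f);
        self.back.fold(acc, &mut f)
    }
}

fn main() {
    let it = TwoHalves { front: [1, 2, 3].iter(), back: [4, 5].iter() };
    let sum: i32 = it.fold(0, |acc, &x| acc + x);
    assert_eq!(sum, 15);
}
```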
Vadim Petrochenkov
811a2b91de Prohibit patterns in trait methods without bodies 2016-10-26 20:55:16 +03:00
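For context, a sketch of what this change prohibits: pattern arguments in trait method signatures that have no body (current compilers reject these with E0642). The trait and types below are illustrative only.

```rust
// A pattern (here, tuple destructuring) in a bodiless trait method signature
// is what the commit above prohibits:
//
//     trait Summable {
//         fn sum_pair((a, b): (i32, i32)) -> i32; // error[E0642]
//     }
//
// Bodiless methods may only use plain identifier bindings; destructuring
// moves into the implementation, where the method has a body.
trait Summable {
    fn sum_pair(pair: (i32, i32)) -> i32;
}

struct S;

impl Summable for S {
    fn sum_pair((a, b): (i32, i32)) -> i32 {
        a + b
    }
}

fn main() {
    assert_eq!(S::sum_pair((2, 3)), 5);
}
```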
bors
a5b6a9fa8a Auto merge of #37312 - arthurprs:sip-smaller, r=alexcrichton
Small improvement to SipHasher

A very small but constant improvement; the objective is to lower latency for u16, u32, and small strings. (A short usage sketch of the code path being measured follows this entry, after the benchmark tables.)

CC #35735

```
➜  siphash-bench git:(master) ✗ sudo nice -n -20 target/release/foo-648738a54f390643 --bench | tee benches.txt

running 62 tests
test _same                       ... bench:           0 ns/iter (+/- 0)
test _warmup                     ... bench:           0 ns/iter (+/- 0)
test rust_siphash13::int_u16     ... bench:          12 ns/iter (+/- 1)
test rust_siphash13::int_u32     ... bench:          14 ns/iter (+/- 0)
test rust_siphash13::int_u64     ... bench:          11 ns/iter (+/- 1)
test rust_siphash13::int_u8      ... bench:          11 ns/iter (+/- 1)
test rust_siphash13::slice::_10  ... bench:          18 ns/iter (+/- 1)
test rust_siphash13::slice::_100 ... bench:          42 ns/iter (+/- 2)
test rust_siphash13::slice::_11  ... bench:          19 ns/iter (+/- 1)
test rust_siphash13::slice::_12  ... bench:          21 ns/iter (+/- 3)
test rust_siphash13::slice::_2   ... bench:          16 ns/iter (+/- 2)
test rust_siphash13::slice::_200 ... bench:          68 ns/iter (+/- 3)
test rust_siphash13::slice::_3   ... bench:          17 ns/iter (+/- 3)
test rust_siphash13::slice::_4   ... bench:          18 ns/iter (+/- 1)
test rust_siphash13::slice::_5   ... bench:          19 ns/iter (+/- 4)
test rust_siphash13::slice::_6   ... bench:          19 ns/iter (+/- 1)
test rust_siphash13::slice::_7   ... bench:          20 ns/iter (+/- 1)
test rust_siphash13::slice::_8   ... bench:          16 ns/iter (+/- 1)
test rust_siphash13::slice::_9   ... bench:          18 ns/iter (+/- 2)
test rust_siphash13::str_::_10   ... bench:          18 ns/iter (+/- 1)
test rust_siphash13::str_::_100  ... bench:          41 ns/iter (+/- 2)
test rust_siphash13::str_::_11   ... bench:          19 ns/iter (+/- 1)
test rust_siphash13::str_::_12   ... bench:          20 ns/iter (+/- 2)
test rust_siphash13::str_::_2    ... bench:          16 ns/iter (+/- 1)
test rust_siphash13::str_::_200  ... bench:          68 ns/iter (+/- 3)
test rust_siphash13::str_::_3    ... bench:          17 ns/iter (+/- 1)
test rust_siphash13::str_::_4    ... bench:          18 ns/iter (+/- 2)
test rust_siphash13::str_::_5    ... bench:          19 ns/iter (+/- 6)
test rust_siphash13::str_::_6    ... bench:          20 ns/iter (+/- 5)
test rust_siphash13::str_::_7    ... bench:          23 ns/iter (+/- 1)
test rust_siphash13::str_::_8    ... bench:          15 ns/iter (+/- 1)
test rust_siphash13::str_::_9    ... bench:          17 ns/iter (+/- 1)
test sip1b::int_u16              ... bench:          10 ns/iter (+/- 1)
test sip1b::int_u32              ... bench:           9 ns/iter (+/- 1)
test sip1b::int_u64              ... bench:          12 ns/iter (+/- 1)
test sip1b::int_u8               ... bench:           7 ns/iter (+/- 0)
test sip1b::slice::_10           ... bench:          12 ns/iter (+/- 1)
test sip1b::slice::_100          ... bench:          33 ns/iter (+/- 2)
test sip1b::slice::_11           ... bench:          13 ns/iter (+/- 0)
test sip1b::slice::_12           ... bench:          12 ns/iter (+/- 1)
test sip1b::slice::_2            ... bench:          10 ns/iter (+/- 0)
test sip1b::slice::_200          ... bench:          62 ns/iter (+/- 2)
test sip1b::slice::_3            ... bench:          10 ns/iter (+/- 1)
test sip1b::slice::_4            ... bench:           9 ns/iter (+/- 0)
test sip1b::slice::_5            ... bench:          10 ns/iter (+/- 1)
test sip1b::slice::_6            ... bench:          10 ns/iter (+/- 0)
test sip1b::slice::_7            ... bench:          11 ns/iter (+/- 0)
test sip1b::slice::_8            ... bench:          11 ns/iter (+/- 1)
test sip1b::slice::_9            ... bench:          12 ns/iter (+/- 1)
test sip1b::str_::_10            ... bench:          15 ns/iter (+/- 1)
test sip1b::str_::_100           ... bench:          37 ns/iter (+/- 3)
test sip1b::str_::_11            ... bench:          16 ns/iter (+/- 1)
test sip1b::str_::_12            ... bench:          14 ns/iter (+/- 1)
test sip1b::str_::_2             ... bench:          13 ns/iter (+/- 1)
test sip1b::str_::_200           ... bench:          67 ns/iter (+/- 5)
test sip1b::str_::_3             ... bench:          14 ns/iter (+/- 2)
test sip1b::str_::_4             ... bench:          12 ns/iter (+/- 1)
test sip1b::str_::_5             ... bench:          13 ns/iter (+/- 1)
test sip1b::str_::_6             ... bench:          13 ns/iter (+/- 0)
test sip1b::str_::_7             ... bench:          16 ns/iter (+/- 1)
test sip1b::str_::_8             ... bench:          14 ns/iter (+/- 1)
test sip1b::str_::_9             ... bench:          15 ns/iter (+/- 1)

test result: ok. 0 passed; 0 failed; 0 ignored; 62 measured

➜  siphash-bench git:(master) ✗ cargo benchcmp rust_siphash13:: sip1b:: benches.txt
 name         rust_siphash13:: ns/iter  sip1b:: ns/iter  diff ns/iter   diff %
 int_u16      12                        10                         -2  -16.67%
 int_u32      14                        9                          -5  -35.71%
 int_u64      11                        12                          1    9.09%
 int_u8       11                        7                          -4  -36.36%
 slice::_10   18                        12                         -6  -33.33%
 slice::_100  42                        33                         -9  -21.43%
 slice::_11   19                        13                         -6  -31.58%
 slice::_12   21                        12                         -9  -42.86%
 slice::_2    16                        10                         -6  -37.50%
 slice::_200  68                        62                         -6   -8.82%
 slice::_3    17                        10                         -7  -41.18%
 slice::_4    18                        9                          -9  -50.00%
 slice::_5    19                        10                         -9  -47.37%
 slice::_6    19                        10                         -9  -47.37%
 slice::_7    20                        11                         -9  -45.00%
 slice::_8    16                        11                         -5  -31.25%
 slice::_9    18                        12                         -6  -33.33%
 str_::_10    18                        15                         -3  -16.67%
 str_::_100   41                        37                         -4   -9.76%
 str_::_11    19                        16                         -3  -15.79%
 str_::_12    20                        14                         -6  -30.00%
 str_::_2     16                        13                         -3  -18.75%
 str_::_200   68                        67                         -1   -1.47%
 str_::_3     17                        14                         -3  -17.65%
 str_::_4     18                        12                         -6  -33.33%
 str_::_5     19                        13                         -6  -31.58%
 str_::_6     20                        13                         -7  -35.00%
 str_::_7     23                        16                         -7  -30.43%
 str_::_8     15                        14                         -1   -6.67%
 str_::_9     17                        15                         -2  -11.76%

```

Results are from a modified hash-rs suite (preallocating maps and adding slice/str variants).

graph version: http://imgur.com/a/DuoI4

```
➜  hash-rs git:(rfc-extend-hasher) ✗ cargo benchcmp sip13:: sip13opt:: benches.txt
 name                             sip13:: ns/iter      sip13opt:: ns/iter   diff ns/iter   diff %
 slice::mapcountdense_000000001   27,343 (36 MB/s)     26,401 (37 MB/s)             -942   -3.45%
 slice::mapcountdense_000000002   28,982 (69 MB/s)     26,807 (74 MB/s)           -2,175   -7.50%
 slice::mapcountdense_000000003   29,304 (102 MB/s)    27,360 (109 MB/s)          -1,944   -6.63%
 slice::mapcountdense_000000004   30,411 (131 MB/s)    25,888 (154 MB/s)          -4,523  -14.87%
 slice::mapcountdense_000000005   32,625 (153 MB/s)    27,486 (181 MB/s)          -5,139  -15.75%
 slice::mapcountdense_000000006   34,920 (171 MB/s)    27,204 (220 MB/s)          -7,716  -22.10%
 slice::mapcountdense_000000007   33,497 (208 MB/s)    28,330 (247 MB/s)          -5,167  -15.43%
 slice::mapcountdense_000000008   31,153 (256 MB/s)    28,617 (279 MB/s)          -2,536   -8.14%
 slice::mapcountdense_000000009   30,745 (292 MB/s)    29,666 (303 MB/s)          -1,079   -3.51%
 slice::mapcountdense_000000010   31,509 (317 MB/s)    29,804 (335 MB/s)          -1,705   -5.41%
 slice::mapcountdense_000000011   32,526 (338 MB/s)    30,520 (360 MB/s)          -2,006   -6.17%
 slice::mapcountdense_000000012   32,981 (363 MB/s)    28,739 (417 MB/s)          -4,242  -12.86%
 slice::mapcountdense_000000013   34,713 (374 MB/s)    30,348 (428 MB/s)          -4,365  -12.57%
 slice::mapcountdense_000000014   34,635 (404 MB/s)    29,974 (467 MB/s)          -4,661  -13.46%
 slice::mapcountdense_000000015   35,924 (417 MB/s)    30,584 (490 MB/s)          -5,340  -14.86%
 slice::mapcountdense_000000016   31,939 (500 MB/s)    30,564 (523 MB/s)          -1,375   -4.31%
 slice::mapcountdense_000000032   36,545 (875 MB/s)    34,833 (918 MB/s)          -1,712   -4.68%
 slice::mapcountdense_000000064   44,691 (1432 MB/s)   43,912 (1457 MB/s)           -779   -1.74%
 slice::mapcountdense_000000128   67,210 (1904 MB/s)   64,630 (1980 MB/s)         -2,580   -3.84%
 slice::mapcountdense_000000256   110,320 (2320 MB/s)  108,713 (2354 MB/s)        -1,607   -1.46%
 slice::mapcountsparse_000000001  29,686 (33 MB/s)     28,673 (34 MB/s)           -1,013   -3.41%
 slice::mapcountsparse_000000002  32,073 (62 MB/s)     30,519 (65 MB/s)           -1,554   -4.85%
 slice::mapcountsparse_000000003  33,184 (90 MB/s)     31,208 (96 MB/s)           -1,976   -5.95%
 slice::mapcountsparse_000000004  34,344 (116 MB/s)    30,242 (132 MB/s)          -4,102  -11.94%
 slice::mapcountsparse_000000005  34,536 (144 MB/s)    30,552 (163 MB/s)          -3,984  -11.54%
 slice::mapcountsparse_000000006  35,791 (167 MB/s)    30,813 (194 MB/s)          -4,978  -13.91%
 slice::mapcountsparse_000000007  36,773 (190 MB/s)    31,362 (223 MB/s)          -5,411  -14.71%
 slice::mapcountsparse_000000008  33,101 (241 MB/s)    32,399 (246 MB/s)            -702   -2.12%
 slice::mapcountsparse_000000009  34,025 (264 MB/s)    33,065 (272 MB/s)            -960   -2.82%
 slice::mapcountsparse_000000010  34,755 (287 MB/s)    33,152 (301 MB/s)          -1,603   -4.61%
 slice::mapcountsparse_000000011  35,682 (308 MB/s)    33,631 (327 MB/s)          -2,051   -5.75%
 slice::mapcountsparse_000000012  36,422 (329 MB/s)    32,604 (368 MB/s)          -3,818  -10.48%
 slice::mapcountsparse_000000013  37,561 (346 MB/s)    32,978 (394 MB/s)          -4,583  -12.20%
 slice::mapcountsparse_000000014  38,476 (363 MB/s)    33,376 (419 MB/s)          -5,100  -13.26%
 slice::mapcountsparse_000000015  39,202 (382 MB/s)    33,750 (444 MB/s)          -5,452  -13.91%
 slice::mapcountsparse_000000016  34,898 (458 MB/s)    33,621 (475 MB/s)          -1,277   -3.66%
 slice::mapcountsparse_000000032  39,767 (804 MB/s)    38,013 (841 MB/s)          -1,754   -4.41%
 slice::mapcountsparse_000000064  47,810 (1338 MB/s)   46,332 (1381 MB/s)         -1,478   -3.09%
 slice::mapcountsparse_000000128  64,519 (1983 MB/s)   63,322 (2021 MB/s)         -1,197   -1.86%
 slice::mapcountsparse_000000256  101,042 (2533 MB/s)  99,754 (2566 MB/s)         -1,288   -1.27%
 str_::mapcountdense_000000001    27,183 (36 MB/s)     24,007 (41 MB/s)           -3,176  -11.68%
 str_::mapcountdense_000000002    28,940 (69 MB/s)     24,574 (81 MB/s)           -4,366  -15.09%
 str_::mapcountdense_000000003    29,000 (103 MB/s)    24,687 (121 MB/s)          -4,313  -14.87%
 str_::mapcountdense_000000004    29,822 (134 MB/s)    24,377 (164 MB/s)          -5,445  -18.26%
 str_::mapcountdense_000000005    31,962 (156 MB/s)    25,184 (198 MB/s)          -6,778  -21.21%
 str_::mapcountdense_000000006    32,218 (186 MB/s)    25,020 (239 MB/s)          -7,198  -22.34%
 str_::mapcountdense_000000007    35,482 (197 MB/s)    27,705 (252 MB/s)          -7,777  -21.92%
 str_::mapcountdense_000000008    28,643 (279 MB/s)    25,563 (312 MB/s)          -3,080  -10.75%
 str_::mapcountdense_000000009    30,112 (298 MB/s)    26,773 (336 MB/s)          -3,339  -11.09%
 str_::mapcountdense_000000010    31,554 (316 MB/s)    27,607 (362 MB/s)          -3,947  -12.51%
 str_::mapcountdense_000000011    32,062 (343 MB/s)    27,770 (396 MB/s)          -4,292  -13.39%
 str_::mapcountdense_000000012    32,258 (372 MB/s)    25,612 (468 MB/s)          -6,646  -20.60%
 str_::mapcountdense_000000013    33,544 (387 MB/s)    26,908 (483 MB/s)          -6,636  -19.78%
 str_::mapcountdense_000000014    34,681 (403 MB/s)    27,267 (513 MB/s)          -7,414  -21.38%
 str_::mapcountdense_000000015    37,883 (395 MB/s)    30,226 (496 MB/s)          -7,657  -20.21%
 str_::mapcountdense_000000016    30,299 (528 MB/s)    27,960 (572 MB/s)          -2,339   -7.72%
 str_::mapcountdense_000000032    34,372 (930 MB/s)    32,736 (977 MB/s)          -1,636   -4.76%
 str_::mapcountdense_000000048    38,610 (1243 MB/s)   36,437 (1317 MB/s)         -2,173   -5.63%
 str_::mapcountdense_000000064    43,052 (1486 MB/s)   41,269 (1550 MB/s)         -1,783   -4.14%
 str_::mapcountdense_000000128    64,059 (1998 MB/s)   62,007 (2064 MB/s)         -2,052   -3.20%
 str_::mapcountdense_000000256    109,608 (2335 MB/s)  107,184 (2388 MB/s)        -2,424   -2.21%
 str_::mapcountsparse_000000001   29,155 (34 MB/s)     26,151 (38 MB/s)           -3,004  -10.30%
 str_::mapcountsparse_000000002   31,536 (63 MB/s)     27,787 (71 MB/s)           -3,749  -11.89%
 str_::mapcountsparse_000000003   32,524 (92 MB/s)     27,861 (107 MB/s)          -4,663  -14.34%
 str_::mapcountsparse_000000004   33,535 (119 MB/s)    27,585 (145 MB/s)          -5,950  -17.74%
 str_::mapcountsparse_000000005   34,239 (146 MB/s)    27,520 (181 MB/s)          -6,719  -19.62%
 str_::mapcountsparse_000000006   35,485 (169 MB/s)    27,437 (218 MB/s)          -8,048  -22.68%
 str_::mapcountsparse_000000007   39,098 (179 MB/s)    30,465 (229 MB/s)          -8,633  -22.08%
 str_::mapcountsparse_000000008   30,882 (259 MB/s)    29,215 (273 MB/s)          -1,667   -5.40%
 str_::mapcountsparse_000000009   33,375 (269 MB/s)    29,301 (307 MB/s)          -4,074  -12.21%
 str_::mapcountsparse_000000010   33,531 (298 MB/s)    29,008 (344 MB/s)          -4,523  -13.49%
 str_::mapcountsparse_000000011   34,607 (317 MB/s)    29,800 (369 MB/s)          -4,807  -13.89%
 str_::mapcountsparse_000000012   35,700 (336 MB/s)    28,380 (422 MB/s)          -7,320  -20.50%
 str_::mapcountsparse_000000013   36,692 (354 MB/s)    29,350 (442 MB/s)          -7,342  -20.01%
 str_::mapcountsparse_000000014   37,326 (375 MB/s)    29,285 (478 MB/s)          -8,041  -21.54%
 str_::mapcountsparse_000000015   41,098 (364 MB/s)    33,073 (453 MB/s)          -8,025  -19.53%
 str_::mapcountsparse_000000016   33,046 (484 MB/s)    30,717 (520 MB/s)          -2,329   -7.05%
 str_::mapcountsparse_000000032   37,471 (853 MB/s)    35,542 (900 MB/s)          -1,929   -5.15%
 str_::mapcountsparse_000000048   41,324 (1161 MB/s)   39,332 (1220 MB/s)         -1,992   -4.82%
 str_::mapcountsparse_000000064   45,858 (1395 MB/s)   43,802 (1461 MB/s)         -2,056   -4.48%
 str_::mapcountsparse_000000128   62,471 (2048 MB/s)   60,683 (2109 MB/s)         -1,788   -2.86%
 str_::mapcountsparse_000000256   101,283 (2527 MB/s)  97,655 (2621 MB/s)         -3,628   -3.58%
```
2016-10-26 08:15:07 -07:00
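As a rough illustration of the code path these benchmarks exercise (hashing small fixed-size integers and short strings), here is a minimal sketch using the standard `Hash`/`Hasher` API. `DefaultHasher` stands in for the SipHash-1-3 hasher behind `HashMap`; its exact algorithm is formally unspecified, so this is only an approximation of what the numbers above measure.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a single value with the standard library's default hasher
// (SipHash-based at the time of writing, though formally unspecified).
fn hash_one<T: Hash>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // The cases the PR targets: small integers and short strings.
    println!("u16:   {:#018x}", hash_one(&0xBEEFu16));
    println!("u32:   {:#018x}", hash_one(&0xDEAD_BEEFu32));
    println!("short: {:#018x}", hash_one(&"hello"));
}
```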
bors
a6b3b01b5f Auto merge of #37270 - Mark-Simulacrum:smallvec-optimized-arenas, r=eddyb
Add ArrayVec and AccumulateVec to reduce heap allocations during interning of slices

Updates `mk_tup`, `mk_type_list`, and `mk_substs` to allow interning directly from iterators. The previous PR, #37220, changed some of the calls to pass a borrowed slice from `Vec` instead of directly passing the iterator, and these changes further optimize that to avoid the allocation entirely.

This change yields 50% less malloc calls in [some cases](https://pastebin.mozilla.org/8921686). It also yields decent, though not amazing, performance improvements:
```
futures-rs-test  4.091s vs  4.021s --> 1.017x faster (variance: 1.004x, 1.004x)
helloworld       0.219s vs  0.220s --> 0.993x faster (variance: 1.010x, 1.018x)
html5ever-2016-  3.805s vs  3.736s --> 1.018x faster (variance: 1.003x, 1.009x)
hyper.0.5.0      4.609s vs  4.571s --> 1.008x faster (variance: 1.015x, 1.017x)
inflate-0.1.0    3.864s vs  3.883s --> 0.995x faster (variance: 1.232x, 1.005x)
issue-32062-equ  0.309s vs  0.299s --> 1.033x faster (variance: 1.014x, 1.003x)
issue-32278-big  1.614s vs  1.594s --> 1.013x faster (variance: 1.007x, 1.004x)
jld-day15-parse  1.390s vs  1.326s --> 1.049x faster (variance: 1.006x, 1.009x)
piston-image-0. 10.930s vs 10.675s --> 1.024x faster (variance: 1.006x, 1.010x)
reddit-stress    2.302s vs  2.261s --> 1.019x faster (variance: 1.010x, 1.026x)
regex.0.1.30     2.250s vs  2.240s --> 1.005x faster (variance: 1.087x, 1.011x)
rust-encoding-0  1.895s vs  1.887s --> 1.005x faster (variance: 1.005x, 1.018x)
syntex-0.42.2   29.045s vs 28.663s --> 1.013x faster (variance: 1.004x, 1.006x)
syntex-0.42.2-i 13.925s vs 13.868s --> 1.004x faster (variance: 1.022x, 1.007x)
```

We implement a small-size-optimized vector, intended primarily for collecting iterators that are presumed to be short. This vector cannot be "upsized"/reallocated into a heap-allocated vector, since that would require (slow) branching logic, but heap allocation is possible during the initial collection from an iterator.

We make the new `AccumulateVec` and `ArrayVec` generic over implementors of the `Array` trait, of which there is currently one, `[T; 8]`. In the future, this is likely to expand to other values of N.

Huge thanks to @nnethercote for collecting the performance and other statistics mentioned above.
2016-10-26 03:47:55 -07:00
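A much-simplified, safe sketch of the fixed-capacity idea behind `ArrayVec` follows. It is not the rustc data structure (which uses uninitialized storage, is generic over the `Array` trait, and, as `AccumulateVec`, can spill to a heap `Vec` while first collecting from an iterator); it only shows elements living inline instead of on the heap.

```rust
// Hypothetical, simplified "array vector": up to 8 elements stored inline
// in an [Option<T>; 8], so pushing never touches the heap.
struct TinyArrayVec<T> {
    storage: [Option<T>; 8],
    len: usize,
}

impl<T> TinyArrayVec<T> {
    fn new() -> Self {
        TinyArrayVec { storage: Default::default(), len: 0 }
    }

    /// Push without allocating; returns Err(value) when the inline space is full.
    fn push(&mut self, value: T) -> Result<(), T> {
        if self.len == self.storage.len() {
            return Err(value);
        }
        self.storage[self.len] = Some(value);
        self.len += 1;
        Ok(())
    }

    fn iter(&self) -> impl Iterator<Item = &T> + '_ {
        self.storage[..self.len].iter().filter_map(|slot| slot.as_ref())
    }
}

fn main() {
    let mut v = TinyArrayVec::new();
    for i in 0..5 {
        v.push(i).unwrap();
    }
    let collected: Vec<&i32> = v.iter().collect();
    assert_eq!(collected, vec![&0, &1, &2, &3, &4]);
}
```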
bors
586a988313 Auto merge of #36421 - TimNN:check-abis, r=alexcrichton
check target abi support

For each extern function and extern block, this PR checks whether the ABI (calling convention) used is supported by the current target.

This was achieved by adding an `abi_blacklist` field to the target specifications, listing the calling conventions unsupported for that target.
2016-10-25 21:49:59 -07:00
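A stand-alone, hypothetical sketch of the kind of check described: the `TargetSpec` struct, the `abi_blacklist` contents, and the error wording below are illustrative, not rustc's actual target-spec or typeck code.

```rust
// Hypothetical sketch: each target lists calling conventions it does not
// support, and extern declarations using one of them are rejected.
struct TargetSpec {
    name: &'static str,
    abi_blacklist: &'static [&'static str],
}

fn check_abi(target: &TargetSpec, abi: &str) -> Result<(), String> {
    if target.abi_blacklist.iter().any(|&blocked| blocked == abi) {
        Err(format!(
            "ABI `{}` is not supported for the target `{}`",
            abi, target.name
        ))
    } else {
        Ok(())
    }
}

fn main() {
    // Illustrative blacklist; not the real list for this target.
    let target = TargetSpec {
        name: "aarch64-unknown-linux-gnu",
        abi_blacklist: &["stdcall", "fastcall", "vectorcall"],
    };
    assert!(check_abi(&target, "C").is_ok());
    assert!(check_abi(&target, "stdcall").is_err());
}
```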
John Hodge
d68fb5f20a Fix typo, it bothered me 2016-10-26 11:14:46 +08:00
Mark-Simulacrum
989eba79a3 Add size hint to Result's FromIterator implementation. 2016-10-25 20:06:17 -06:00
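The `FromIterator` impl touched here is the one used when collecting an iterator of `Result`s into a `Result` of a collection; forwarding a size hint lets the inner collection reserve capacity up front. A small usage sketch (the parsing example is illustrative):

```rust
// Collecting Result items into Result<Vec<_>, _> goes through Result's
// FromIterator impl; the size hint it forwards lets Vec preallocate.
fn parse_all(inputs: &[&str]) -> Result<Vec<i32>, std::num::ParseIntError> {
    inputs.iter().map(|s| s.parse::<i32>()).collect()
}

fn main() {
    assert_eq!(parse_all(&["1", "2", "3"]), Ok(vec![1, 2, 3]));
    assert!(parse_all(&["1", "oops"]).is_err());
}
```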
Mark-Simulacrum
982a48575b Utilize AccumulateVec to avoid heap allocations in mk_{substs, type_list, tup} calls. 2016-10-25 20:06:17 -06:00
Mark-Simulacrum
a4f7ba376e Add AccumulateVec, a potentially stack-allocated vector.
AccumulateVec is generic over the Array trait, which is currently only
implemented for [T; 8].
2016-10-25 20:06:17 -06:00
bors
a7557e758d Auto merge of #37361 - jseyfried:fix_crate_var_regressions, r=nrc
Fix `$crate`-related regressions

Fixes #37345, fixes #37357, fixes #37352, and improves the `unused_extern_crates` lint.
r? @nrc
2016-10-25 17:54:13 -07:00
Jeffrey Seyfried
0d30325286 Avoid false positive unused_extern_crates. 2016-10-25 20:38:58 +00:00
Jeffrey Seyfried
04ca378b89 Support use $crate; with a future compatibility warning. 2016-10-25 20:26:00 +00:00
Jeffrey Seyfried
199ed20aa6 Fix $crate-related regressions. 2016-10-25 20:25:59 +00:00
Ulrik Sverdrup
a16626fc42 iter: Implement .fold() for .chain()
Chain can do something interesting here: it can pass the fold on
to its inner iterators.

This lets the underlying iterator's custom fold() be used, and skips the
regular chain logic in next.
2016-10-25 22:06:39 +02:00
bors
aef18be1bc Auto merge of #37111 - TimNN:sized-enums, r=nikomatsakis
Disallow Unsized Enums

Fixes #16812.

This PR is a potential fix for #16812, an issue which is reported [again](https://github.com/rust-lang/rust/issues/36801) and [again](https://github.com/rust-lang/rust/issues/36975), with over a dozen duplicates by now.

This PR is mainly meant to promoted discussion about the issue and the correct way to fix it.

This is a [breaking-change] since the error is now reported during wfchecking, so that even the definition of a (potentially) unsized enum will cause an error (whereas it would previously cause an ICE at trans time if the enum was used in an unsized manner).
2016-10-25 12:37:43 -07:00
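A minimal example of the kind of definition that is now rejected at WF-checking: an enum variant holding an unsized type directly. Enum variant fields must be `Sized` (unlike the last field of a struct), so indirection such as `Box` is the usual fix; the `Node` type below is illustrative.

```rust
// What #37111 rejects at definition time (previously an ICE at trans time
// if the enum was used in an unsized way):
//
//     enum Node {
//         Leaf(str),      // error: the size for values of type `str`
//         Branch([Node]), //        cannot be known at compilation time
//     }
//
// Boxing the unsized data keeps every variant Sized:
enum Node {
    Leaf(Box<str>),
    Branch(Box<[Node]>),
}

fn count_leaves(node: &Node) -> usize {
    match node {
        Node::Leaf(_) => 1,
        Node::Branch(children) => children.iter().map(count_leaves).sum(),
    }
}

fn main() {
    let tree = Node::Branch(
        vec![Node::Leaf("a".into()), Node::Leaf("b".into())].into_boxed_slice(),
    );
    assert_eq!(count_leaves(&tree), 2);
}
```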
arthurprs
a319d13a9b Small improvement to SipHasher 2016-10-25 20:33:03 +02:00
Tim Neumann
1422ac9a8f adapt tests 2016-10-25 19:56:36 +02:00
Duncan
09227b17f4 Vec docs: fix broken links and make quoting consistent 2016-10-26 06:24:52 +13:00
Srinivas Reddy Thatiparthy
e820a866bc run rustfmt on libcollectionstest 2016-10-25 21:59:22 +05:30
Srinivas Reddy Thatiparthy
892a05d694 run rustfmt on librustc_metadata folder 2016-10-25 21:53:11 +05:30
Eduard Burtescu
3fb24c18ab rustc_metadata: move is_extern_item to trans. 2016-10-25 18:18:17 +03:00
Taylor Cramer
2bd94188f7 Add identifier to unused import warnings 2016-10-25 08:16:40 -07:00
Zoffix Znet
22ce98d0e7 Fix typo 2016-10-25 10:03:55 -04:00
Peter Atashian
b3e8c4c2be Print out the error when HeapFree failures do occur 2016-10-25 10:00:16 -04:00
Ulrik Sverdrup
780acda325 iter: Implement .fold() for .cloned() and .map()
Implement .fold() specifically for .map() and .cloned() so that any
inner fold improvements are available through map and cloned.
2016-10-25 15:50:52 +02:00
Ulrik Sverdrup
15a95866b4 Special case .fold() for VecDeque's iterators 2016-10-25 15:50:52 +02:00
bors
67f26f7e0c Auto merge of #37360 - jseyfried:fix_label_scope, r=nrc
resolve: fix label scopes

Fixes #37353 (turns an ICE back into an error).
r? @nrc
2016-10-25 06:20:02 -07:00
Liigo Zhuang
47515ad5bd rustdoc: mark unsafe fns with icons 2016-10-25 17:12:33 +08:00
bors
affc3b7552 Auto merge of #37292 - jseyfried:import_macros_in_resolve, r=nrc
Process `#[macro_use]` imports in `resolve` and clean up macro loading

Groundwork macro modularization (cc #35896).
r? @nrc
2016-10-24 23:15:59 -07:00
Nicholas Nethercote
c440a7ae65 Don't use Rc in TokenTreeOrTokenTreeVec.
This avoids 800,000 allocations when compiling html5ever.
2016-10-25 12:20:14 +11:00
Nicholas Nethercote
3fd90d8aa5 Use SmallVector for TtReader::stack.
This avoids 800,000 heap allocations when compiling html5ever. It
requires tweaking `SmallVector` a little.
2016-10-25 11:48:25 +11:00
Nicholas Nethercote
0a16a11c39 Use SmallVector for the stack in macro_parser::parse.
This avoids 800,000 heap allocations when compiling html5ever.
2016-10-25 11:48:20 +11:00
Taylor Cramer
ab6119a38f Fix coercin -> coercion typo 2016-10-24 17:33:41 -07:00
Taylor Cramer
4bb6d4e740 rustc_typeck: Allow reification from fn item to unsafe ptr 2016-10-24 17:05:58 -07:00
Raph Levien
c4651dba5f Support for aarch64 architecture on Fuchsia
This patch adds support for the aarch64-unknown-fuchsia target. Also
updates src/liblibc submodule to include required libc change.
2016-10-24 16:58:35 -07:00
Raph Levien
592d7bfb3a Add support for kernel randomness for Fuchsia
Wire up cprng syscall as provider for rand::os::OsRng on Fuchsia.
2016-10-24 16:48:45 -07:00
bors
7a208648da Auto merge of #37382 - jonathandturner:rollup, r=jonathandturner
Rollup of 7 pull requests

- Successful merges: #37228, #37304, #37324, #37328, #37336, #37349, #37372
- Failed merges:
2016-10-24 16:47:38 -07:00
Jonathan Turner
e948cf17bc Rollup merge of #37372 - vtduncan:pathbuf-docs-link, r=steveklabnik
Link to PathBuf from the Path docs

I got stuck trying to use `Path` when `PathBuf` was what I needed. Hopefully this makes `PathBuf` and the module docs a bit easier to find for others.

r? @steveklabnik
2016-10-24 15:41:29 -07:00
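For readers hitting the same confusion, the owned/borrowed split the docs now point out looks like this in practice (analogous to `String` vs `str`); the `log_file_for` helper is just an illustrative example, not an API from the PR.

```rust
use std::path::{Path, PathBuf};

// Path is a borrowed, unsized view (like str); PathBuf is the owned,
// growable counterpart (like String). Building or extending a path needs
// PathBuf; merely inspecting one only needs &Path.
fn log_file_for(dir: &Path, name: &str) -> PathBuf {
    let mut path = dir.to_path_buf(); // upgrade the borrowed view to an owned buffer
    path.push(name);                  // mutation requires PathBuf
    path.set_extension("log");
    path
}

fn main() {
    let path = log_file_for(Path::new("/var/tmp"), "build");
    assert_eq!(path.file_name().unwrap(), "build.log");
    // PathBuf derefs to Path, so the borrowed API keeps working on it:
    assert_eq!(path.extension().unwrap(), "log");
}
```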
Jonathan Turner
59b7ea4c59 Rollup merge of #37349 - srinivasreddy:meta_1, r=nikomatsakis
rustfmt on metadata folder
2016-10-24 15:41:29 -07:00
Jonathan Turner
691ab948ce Rollup merge of #37336 - michaelwoerister:debuginfo-type-ids, r=eddyb
debuginfo: Use TypeIdHasher for generating global debuginfo type IDs.

The only requirement for debuginfo type IDs is that they are globally unique. The `TypeIdHasher` (which is used for `std::intrinsics::type_id()`) provides that, so we can get rid of some redundancy by re-using it for debuginfo. Values produced by the `TypeIdHasher` are also more stable than those from the current `UniqueTypeId` generation algorithm, which incorporates `NodeId`s and is therefore not good for incremental compilation.

@alexcrichton @eddyb: Could you take a look at the endianness adaptations that I made to the `TypeIdHasher`?

Also, are we sure that a 64-bit hash is wide enough for something that is supposed to be globally unique? For debuginfo I'm using 160 bits to make sure that we don't run into conflicts there.
2016-10-24 15:41:29 -07:00
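The user-facing counterpart of the identifier discussed above is `std::any::TypeId`, which is likewise expected to be globally unique per type; the 160-bit debuginfo variant mentioned in the PR is internal to rustc and not shown here. A tiny sketch of that per-type uniqueness:

```rust
use std::any::TypeId;

// TypeId::of::<T>() is the stable, user-visible form of a "globally unique
// per type" identifier: equal types compare equal, distinct types do not.
fn same_type<A: 'static, B: 'static>() -> bool {
    TypeId::of::<A>() == TypeId::of::<B>()
}

fn main() {
    assert!(same_type::<u32, u32>());
    assert!(!same_type::<u32, i32>());
    assert!(!same_type::<Vec<u8>, Vec<u16>>());
}
```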