This was causing lots of ICEs in cargo. I sadly wasn't ever able to reduce the
test case down, but I presume that's because it has to do with node id
collisions which are pretty difficult to turn up...
In C, `ctime(t)` is equivalent to `asctime(localtime(t))`, so the result
should depend on the local timezone. The current `ctime` is compatible with
C's `asctime`, not C's `ctime`.
This commit renames `ctime` to `asctime` and adds `ctime` which converts
the time to the local timezone before formatting it.
This commit also fixes their documentation. The current documentation
of `ctime` says it returns "a string of the current time." However, it
actually returns a string of the time represented by `self`, not the
time when it is called.
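As a rough sketch (assuming the old `libtime` API with `time::now_utc()`
returning a `Tm`; exact output formatting omitted), the two methods now
behave like this:

extern crate time;

fn main() {
    let utc = time::now_utc();
    // `asctime` (formerly `ctime`) formats the `Tm` exactly as stored,
    // mirroring C's asctime().
    println!("{}", utc.asctime());
    // The new `ctime` first converts to the local timezone, so that
    // ctime(t) == asctime(localtime(t)) holds, as in C.
    println!("{}", utc.ctime());
}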
Signed-off-by: OGINO Masanori <masanori.ogino@gmail.com>
One of the examples in the docs on adding documentation to Rust code contains an error that causes the function to run endlessly rather than return the desired result, should someone actually implement it. While the error does not hinder the explanation of documenting code, it looks better corrected.
So far, the type names generated for debuginfo were a bit sketchy. It was not clearly defined when a name should be fully qualified and when not, whether region parameters should be shown or not, and other things like that.
This commit makes the debuginfo module responsible for creating type names instead of using `ppaux::ty_to_str()` and brings type names (as they show up in the DWARF information) in line with GCC and Clang:
* The name of the type being described is unqualified. Its path is defined by its position in the namespace hierarchy.
* Type arguments are always fully qualified, regardless of whether they would actually be in scope at the type definition location.
Care is also taken to make type names consistent across crate boundaries. That is, the code now tries to make the type name the same regardless of whether the type is in the local crate or reconstructed from metadata. Otherwise LLVM will complain about violating the one-definition rule when using link-time optimization (LTO).
This commit also removes all source location information from type descriptions because these cannot be reconstructed for types instantiated from metadata. Again, with LTO enabled, this can lead to two versions of the debuginfo type description, one with and one without source location information, which then triggers the LLVM ODR assertion.
Fortunately, source location information about types is rarely used, so this has little impact. Once source location information is preserved in metadata (#1972) it can also be re-enabled for type descriptions.
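As an illustration of the naming rules above (the rendering shown in the comments is an assumption based on those rules, not copied from actual compiler output):

mod outer {
    pub struct Wrapper<T> { pub value: T }
    pub struct Thing;
}

fn main() {
    // Under the rules above, the debuginfo name of this type would be the
    // unqualified "Wrapper<outer::Thing>": the type itself gets its path from
    // the namespace hierarchy (outer::Wrapper), while the type argument is
    // spelled out fully qualified.
    let _w = outer::Wrapper { value: outer::Thing };
}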
`RUSTFLAGS=-g make check` now works again for me locally, including the LTO test cases (note that I've taken care of #15156 by reverting the change in LLVM that @luqmana identified as the culprit for that issue).
I believe there's more commonality to be found there, but maybe small steps are better. I'm also in the process of documenting what I can; I'll see if I can add it to this PR.
It also seems to me that ideally some of this stuff (especially the pattern sanity check) could live as a separate compiler-agnostic module, but I understand this may not be the right time (it might even be the worst) to start the process of modularising rustc.
The current implementations of `rotate_left` and `rotate_right` are incorrect when the rotation amount is 0, or a multiple of the input's bit size. When `n = 0`, the expression
(self >> n) | (self << ($BITS - n))
results in a shift left by `$BITS` bits, which is undefined behavior (see https://github.com/rust-lang/rust/issues/10183), and currently results in a hardcoded `-1` value, instead of the original input value. Reducing `($BITS - n)` modulo `$BITS`, simplified to `(-n % $BITS)`, fixes this problem.
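A minimal sketch of the corrected rotation, written as a standalone function for one width rather than the actual macro in libcore (names here are illustrative):

fn rotate_left_u32(x: u32, n: u32) -> u32 {
    let n = n % 32;
    // Reducing the complementary shift modulo the bit width avoids the
    // undefined shift-by-32 when n == 0 (or a multiple of 32).
    (x << n) | (x >> ((32 - n) % 32))
}

fn main() {
    assert_eq!(rotate_left_u32(0x8000_0001, 0), 0x8000_0001); // n == 0 round-trips
    assert_eq!(rotate_left_u32(0x8000_0001, 1), 0x0000_0003);
}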
Short-term fix per @brson's comment: https://github.com/rust-lang/rust/issues/13810#issuecomment-43562843. Tested on Win7 x64 and Linux.
One possible issue is that `install.sh` doesn't have a `need_cmd` definition like `configure` does. Should this be ported over as well?
Platform-detection code from `configure` copied over to `install.sh` in
order to special case the lib dir being `bin` on Windows instead of
`lib`.
Short-term fix for #13810.
Check that the bounds on an impl method's type parameters are compatible with the corresponding trait parameter bounds.
This is a version of the patch in PR #12611 by Florian Hahn, modified to
address Niko's feedback.
It does not address the issue of duplicate type parameter bounds, nor
does it address the issue of implementation-defined methods that contain
*fewer* bounds than the trait, because Niko's review indicates that this
should not be necessary (and indeed I believe it is not). A test has
been added to ensure that this works.
This will break code like:
trait Foo {
    fn bar<T:Baz>();
}

impl Foo for Boo {
    fn bar<T:Baz + Quux>() { ... }
    //             ^~~~ ERROR
}
This will be rejected because the implementation requires *more* bounds
than the trait. It can be fixed by either adding the missing bound to
the trait:
trait Foo {
    fn bar<T:Baz + Quux>();
    //             ^~~~
}

impl Foo for Boo {
    fn bar<T:Baz + Quux>() { ... } // OK
}
Or by removing the bound from the impl:
trait Foo {
    fn bar<T:Baz>();
}

impl Foo for Boo {
    fn bar<T:Baz>() { ... } // OK
    //          ^ remove Quux
}
This patch imports the relevant tests from #2687, as well as the test
case in #5886, which is also fixed by this patch.
Closes #2687.
Closes #5886.
[breaking-change]
r? @pnkfelix
It's looking like we're still timing out all over the place with travis_wait
because the entire `make -j4 rustc-stage1` command is taking too long. Instead,
achieve roughly the same idea by just having `-Z time-passes` print timing
information. We shouldn't have a pass that takes longer than 10 minutes in
isolation.
Type check default trait methods with respect to `Self` and their type parameters.
This can break code that mistakenly used type parameters in place of
`Self`. For example, this will break:
trait Foo {
    fn bar<X>(u: X) -> Self {
        u
    }
}
Change this code to not contain a type error. For example:
trait Foo {
    fn bar<X>(x: X) -> X {
        x
    }
}
Closes #15172.
[breaking-change]
`Bitv::new` has been renamed `Bitv::with_capacity`. The new function
`Bitv::new` now creates a `Bitv` with no elements.
The new function `BitvSet::with_capacity` creates a `BitvSet` with
a specified capacity.
On Bitv:
- Add .push() and .pop() which take and return bool, respectively
- Add .truncate() which truncates a Bitv to a specific length
- Add .grow() which grows a Bitv by a specific length
- Add .reserve() which grows the underlying storage to be able to hold
a specified number of bits without resizing
- Implement FromIterator<Vec<bool>>
- Implement Extendable<bool>
- Implement Collection
- Implement Mutable
- Remove .from_bools() since FromIterator<Vec<bool>> now accomplishes this.
- Remove .assign() since Clone::clone_from() accomplishes this.
On BitvSet:
- Add .reserve() which grows the underlying storage to be able to hold
a specified number of bits without resizing
- Add .get_ref() and .get_mut_ref() to return references to the
underlying Bitv
Add documentation to methods on BitvSet that were missing them. Also
make sure #[inline] is on all methods that are (a) one-liners or (b)
private methods whose only purpose is code deduplication.
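A rough usage sketch of the additions described above (the import path and exact signatures are assumptions based on this list, not the final libcollections API):

use std::collections::Bitv;   // assumed path for this sketch

fn main() {
    let mut bv = Bitv::new();          // now creates an empty Bitv
    bv.push(true);
    bv.push(false);
    bv.grow(3, false);                 // grow by three bits, all set to false
    assert_eq!(bv.len(), 5);
    bv.truncate(2);                    // keep only the first two bits
    assert_eq!(bv.pop(), false);       // .pop() returns a bool
    assert_eq!(bv.len(), 1);

    // FromIterator/Extendable let a Bitv be built from an iterator of bools.
    let from_iter: Bitv = vec![true, false, true].into_iter().collect();
    assert_eq!(from_iter.len(), 3);
}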
Removes the following methods from `Bitv`:
- `to_vec`: translates a `Bitv` into a bulky `Vec<uint>` of 0's and 1's
  replace with: `bitv.iter().map(|b| if b {1} else {0}).collect()`
- `to_bools`: translates a `Bitv` into a `Vec<bool>`
  replace with: `bitv.iter().collect()`
- `ones`: internal iterator over all 1 bits in a `Bitv`
  replace with: `BitvSet::from_bitv(bitv).iter().advance(fn)`
These methods had specific functionality which can be replicated more
generally by the modern iterator system. (Also `to_vec` was not even
unit tested!)
The argument passed to Vec::grow is the number of elements to grow
the vector by, not the target number of elements. The old `Bitv`
code did the wrong thing, allocating more memory than it needed to.
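To illustrate the by-vs-to distinction (using the modern `Vec::resize` as a stand-in for the old `Vec::grow`):

fn main() {
    let mut v = vec![0u8; 4];
    let n = 3;
    // Grow BY n elements (to length 7), which is what Vec::grow's argument
    // meant -- not grow TO n elements.
    v.resize(v.len() + n, 0);
    assert_eq!(v.len(), 7);
}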
The internal masking behaviour for `Bitv` is now defined as:
- Any entirely unused words in self.storage must be all zeroes.
- Any partially used words may have anything at all in their
unused bits.
This means:
- When decreasing self.nbits, care must be taken that any
no-longer-used words are zeroed out.
- When increasing self.nbits, care must be taken that any
newly-unmasked bits are set to their correct values.
- When reading words, care should be taken that the values of
unused bits are not used. (Preferably, use `Bitv::mask_words`
which zeroes them out for you.)
The old behaviour was that every unused bit was always set to
zero. The problem with this is that unused bits are almost never
read, so forgetting to do this will result in very subtle and
hard-to-track-down bugs. This way the responsibility for masking
falls on the places which might cause unused bits to be read: for
now, this is only `Bitv::mask_words` and `BitvSet::insert`.
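A hedged sketch of what the masking rule means when reading a partially used word (hypothetical helper, not the actual `Bitv::mask_words` implementation):

fn mask_word(word: u64, bits_used_in_word: u32) -> u64 {
    debug_assert!(bits_used_in_word >= 1 && bits_used_in_word <= 64);
    if bits_used_in_word == 64 {
        word // fully used: nothing to mask
    } else {
        // Zero out the unused high bits, which may contain anything at all.
        word & ((1u64 << bits_used_in_word) - 1)
    }
}

fn main() {
    // A word in which only the low 3 bits are in use; the rest is garbage.
    let raw = 0b1111_0101u64;
    assert_eq!(mask_word(raw, 3), 0b101);
}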
The old `Bitv` structure had two variations: one represented by a vector of
uints, and another represented by a single uint for bit vectors containing
fewer than uint::BITS bits.
The purpose of this is to avoid the indirection of using a Vec, but the
speedup is only available to users who
(a) are storing fewer than uint::BITS bits
(b) know this when they create the vector (since `Bitv`s cannot be resized)
(c) don't know this at compile time (else they could use uint directly)
Giving such specific users a (questionable) speed benefit at the cost of
adding explicit checks to almost every single bit call, frequently writing
the same method twice and making iteration much much more difficult, does
not seem like a worthwhile tradeoff to me.
Also, rustc does not use Bitv anywhere, only through BitvSet, which does
not have this optimization.
For reference, here is some speed data from before and after this PR:
BEFORE:
test bitv::tests::bench_bitv_big ... bench: 4 ns/iter (+/- 1)
test bitv::tests::bench_bitv_big_iter ... bench: 4858 ns/iter (+/- 22)
test bitv::tests::bench_bitv_big_union ... bench: 507 ns/iter (+/- 35)
test bitv::tests::bench_bitv_set_big ... bench: 6 ns/iter (+/- 1)
test bitv::tests::bench_bitv_set_small ... bench: 6 ns/iter (+/- 0)
test bitv::tests::bench_bitv_small ... bench: 5 ns/iter (+/- 1)
test bitv::tests::bench_bitvset_iter ... bench: 12930 ns/iter (+/- 662)
test bitv::tests::bench_btv_small_iter ... bench: 39 ns/iter (+/- 1)
test bitv::tests::bench_uint_small ... bench: 4 ns/iter (+/- 1)
AFTER:
test bitv::tests::bench_bitv_big ... bench: 5 ns/iter (+/- 1)
test bitv::tests::bench_bitv_big_iter ... bench: 5004 ns/iter (+/- 102)
test bitv::tests::bench_bitv_big_union ... bench: 356 ns/iter (+/- 26)
test bitv::tests::bench_bitv_set_big ... bench: 6 ns/iter (+/- 0)
test bitv::tests::bench_bitv_set_small ... bench: 6 ns/iter (+/- 1)
test bitv::tests::bench_bitv_small ... bench: 4 ns/iter (+/- 1)
test bitv::tests::bench_bitvset_iter ... bench: 12918 ns/iter (+/- 621)
test bitv::tests::bench_btv_small_iter ... bench: 50 ns/iter (+/- 5)
test bitv::tests::bench_uint_small ... bench: 4 ns/iter (+/- 1)
Slice patterns are different from the rest in that a single slice pattern
does not have a distinct constructor if it contains a variable-length subslice
pattern. For example, the pattern [a, b, ..tail] can match a slice of length 2, 3, 4
and so on.
As a result, the decision tree for exhaustiveness and redundancy analysis should
explore each of those constructors separately to determine if the pattern could be useful
when specialized for any of them.
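For example (shown with today's slice-pattern syntax, `rest @ ..`, rather than the old `..tail` form), a single arm with a variable-length subslice covers every length from two upwards, so the analysis has to consider each of those lengths:

fn describe(xs: &[u32]) -> &'static str {
    match xs {
        [] => "empty",
        [_] => "one element",
        // One pattern, but it matches slices of length 2, 3, 4, ...
        [_, _, rest @ ..] => {
            if rest.is_empty() { "exactly two" } else { "more than two" }
        }
    }
}

fn main() {
    assert_eq!(describe(&[]), "empty");
    assert_eq!(describe(&[1, 2, 3]), "more than two");
}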
* channel() - #[unstable]. This will likely remain forever
* sync_channel(n: int) - #[unstable with comment]. Concerns have been raised
about the usage of the term "synchronous channel" because that generally only
applies to the case where n == 0. If n > 0 then these channels are often
referred to as buffered channels.
* Sender::send(), SyncSender::send(), Receiver::recv() - #[experimental]. These
functions directly violate the general guideline of not providing a failing
and non-failing variant. These functions were explicitly selected for being
excused from this guideline, but recent discussions have cast doubt on that
decision. These functions are #[experimental] for now until a decision is made
as they are candidates for removal.
* Sender::send_opt(), SyncSender::send_opt(), Receiver::recv_opt() - #[unstable
with a comment]. If the above no-`_opt` functions are removed, these functions
will be renamed to the non-`_opt` variants.
* SyncSender::try_send(), Receiver::try_recv() - #[unstable with a comment].
The return types of these functions do not follow general conventions. They
are consistent with the rest of the api, but not with the rest of the
libraries. Until their return types are nailed down, these functions are
#[unstable].
* Receiver::iter() - #[unstable]. This will likely remain forever.
* std::comm::select - #[experimental]. The functionality is likely to remain in
some form forever, but it is highly unlikely to remain in its current form. It
is unknown how much breakage this will cause if and when the api is
redesigned, so the entire module and its components are all experimental.
* DuplexStream - #[deprecated]. This type is not composable with other channels
in terms of selection or other expected places. It also cannot be used
with ChanWriter and ChanReader, for example. Due to it being only lightly
used, and easily replaced with two channels, this type is being deprecated and
slated for removal.
* Clone for {,Sync}Sender - #[unstable]. This will likely remain forever.
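For reference, a sketch of the channel/sync_channel distinction using the `std::sync::mpsc` names that eventually replaced `std::comm` (the APIs discussed above used `Sender`/`SyncSender`/`Receiver` from `std::comm`):

use std::sync::mpsc;
use std::thread;

fn main() {
    // An asynchronous, unbounded channel.
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || { tx.send(42).unwrap(); });
    assert_eq!(rx.recv().unwrap(), 42);

    // A "synchronous" channel with a buffer of one message; only with a bound
    // of 0 does it rendezvous, which is why the name drew comment above.
    let (stx, srx) = mpsc::sync_channel(1);
    stx.send("buffered").unwrap();          // does not block: one slot free
    assert_eq!(srx.recv().unwrap(), "buffered");

    // try_recv() reports an empty channel through its return value
    // instead of blocking.
    assert!(srx.try_recv().is_err());
}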
POSIX has recvfrom(2) and sendto(2), but those names do not seem to fit Rust's conventions. We already renamed getpeername(2) and getsockname(2), so I think this makes sense.
Alternatively, `receive_from` would be fine. However, we have `.recv()`, so I chose `recv_from`.
What do you think? If this makes sense, should I provide deprecated `recvfrom` and `sendto` methods that just call the new ones, for compatibility?
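For what it's worth, the renamed methods as they exist on today's `std::net::UdpSocket` (this change predates `std::net`, so this is only an analogy):

use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let server = UdpSocket::bind("127.0.0.1:0")?;
    let client = UdpSocket::bind("127.0.0.1:0")?;

    client.send_to(b"ping", server.local_addr()?)?;   // sendto(2)

    let mut buf = [0u8; 4];
    let (n, peer) = server.recv_from(&mut buf)?;      // recvfrom(2)
    assert_eq!(&buf[..n], b"ping");
    assert_eq!(peer, client.local_addr()?);
    Ok(())
}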
While `HashMap::new` and `HashMap::with_capacity` were being initialized with a random `SipHasher`, it turns out that `HashMap::from_iter` was just using the default instance of `SipHasher`, which wasn't randomized. This closes that bug, and also inlines some important methods.
Being able to index into the bytes of a string encourages
poor UTF-8 hygiene. To get a view of `&[u8]` from either
a `String` or `&str` slice, use the `as_bytes()` method.
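For example, a small sketch of the replacement pattern:

fn main() {
    let s = "héllo".to_string();
    let bytes: &[u8] = s.as_bytes();        // byte view of a String
    assert_eq!(bytes[0], b'h');

    let slice: &str = "héllo";
    // 'é' is two bytes in UTF-8 (0xC3 0xA9), which is exactly why indexing
    // into bytes through the string type was a footgun.
    assert_eq!(slice.as_bytes()[1], 0xC3);
}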
Closes #12710.
[breaking-change]
If the diffstat is any indication, this shouldn't have a huge impact, but it will have some. Most changes are in the `str` and `path` modules. A lot of the existing usages were in tests where ASCII is expected. There are a number of other legitimate uses where the characters are known to be ASCII.
Whew. So much here! Feedback very welcome.
This is the first part where we actually start learning things. I'd like to think I struck a good balance of explaining enough details, without getting too bogged down, and without being confusing... but of course I'd think that. 😉
As I mention in the commit comment, we probably want to move the guessing game to the rust-lang org, rather than just having it on my GitHub. Or, I could put the code inline. I think it'd be neat to have it as a project, so people can pull it down with Cargo. Until we make that decision, I'll just leave this here.