This is a small follow-up fix to the previous commit: I needed
to quote the right-hand side of the definition of the variable
MATCHES, to handle the case where there is more than one previously
installed library in the target directory.
Namely, I switched in many places to using GNU make's built-in functions
for directory listing and text processing, rather than spawning a
shell process to do that work.
In the process of the revision, I learned about target-specific
variables, which were directly applicable to INSTALL_LIB (which, on a
per-recipe basis, was always receiving the same actual arguments for
its first two formal parameters in every invocation).
http://www.gnu.org/software/make/manual/html_node/Target_002dspecific.html
(We might be able to make use of those in future refactorings.)
----
Also adds a cleanup pass to get-snapshot.py, since the same
problem arises when we unpack libraries from the snapshot archive into
a build directory with a prior snapshot's artifacts. (I put this step
into the python script rather than the makefile because I wanted to
delay the cleanup pass until after we have at least successfully
downloaded the tarball. That way, if the download fails, we do not
destroy the previously unpacked snapshot libraries and build
products.)
----
Also reverted whitespace changes to minimize the diff.
I plan to reintroduce them in a dedicated commit elsewhere.
When building Rust libraries (e.g. librustc, libstd, etc.), this checks for
and verbosely removes previous build products before invoking rustc.
(Also, when the Make variable VERBOSE is defined, it lists all of the
libraries matching the object library's glob after the rustc
invocation has completed.)
When installing Rust libraries, this checks for previous libraries in the
target install directory, but does not remove them.
The thinking behind these two different modes of operation is that the
installation target, unlike the build tree, is not under the control
of this infrastructure, and it is not up to this Makefile to decide
whether the previous libraries should be removed.
After getting an ICE trying to use the `Repr` enum from middle::trans::adt (see issue #7527), I tried to implement the missing case for struct-like enum variants in `middle::ty::enum_variants()`. It seems to work now (and passes make check) but there are still some uncertainties that bother me:
+ I'm not sure I did everything right. In particular, getting the variant constructor function from the variant node id is just copied from the tuple-variant case. Someone with more experience in the code base should be able to see rather quickly whether this is OK as it is.
+ It is kind of strange that I could not reproduce the ICE with a smaller test case. The unimplemented code path does not seem to be hit in most cases, even when using the exact same `Repr` enum, just with `ty::t` replaced by an opaque pointer. Also, within the `adt` module, `Repr` and matching on it are used multiple times, again without running into problems. Can anyone explain why this is the case? That would be much appreciated.
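For context, here is a minimal sketch (modern syntax; `Shape` is purely hypothetical and not the compiler's `Repr`) of a struct-like enum variant next to a tuple-like one, i.e. the case that `enum_variants()` was missing:

```rust
// Hypothetical illustration only; `Shape` is not from the compiler sources.
enum Shape {
    Circle(f64),                      // tuple-like variant
    Rect { width: f64, height: f64 }, // struct-like variant: the case this PR adds
}

fn area(s: &Shape) -> f64 {
    match *s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect { width, height } => width * height,
    }
}
```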
Apart from that, I hope this PR is useful.
This is a followup to #7510. @catamorphism requested a test, so I have created one, but in doing so I noticed some inconsistency in the error messages that result from referencing nonexistent traits, so I changed the messages to be more consistent.
Change the signature of Iterator.size_hint() to always have a lower bound.
Implement .size_hint() on all remaining iterators (if it differs from the default).
Fix an assertion in grow when using add_front.
Speeds up grow (and the functions that use it) by a lot. We retain the vector instead of creating it anew for each grow.
The struct field hi is removed since it is redundant, and all iterators are updated to use a representation closer to the Deque itself; this should make them work even if deque sizes are not powers of two.
Deque::with_capacity is also implemented, but .reserve() is not yet done.
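As a sketch of what the size_hint change looks like on an iterator with a known length (modern syntax shown; the original trait used `uint`):

```rust
// Sketch only: size_hint() now always reports a definite lower bound,
// with the upper bound still optional.
struct Countdown {
    remaining: usize,
}

impl Iterator for Countdown {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        if self.remaining == 0 {
            None
        } else {
            self.remaining -= 1;
            Some(self.remaining)
        }
    }

    // The exact length is known here, so lower bound == upper bound.
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.remaining, Some(self.remaining))
    }
}
```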
They simply byte-swap an integer to a specific endian, like the hton* functions in C.
These intrinsics are synthesized, so maybe they should be in another file. But since they are just a single line of code each, are based on the bswap intrinsics, and aren't really intended for public consumption, I thought they would fit in the intrinsics file.
The next step working on this could be to expose a trait / generic function for byteswapping.
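As a rough illustration of that next step, here is a hedged sketch (the trait name and methods are hypothetical; the modern integer types already provide `to_be`/`to_le`):

```rust
// Hypothetical trait; `ByteSwap` is not part of the standard library.
trait ByteSwap {
    fn to_big_endian(self) -> Self;
    fn to_little_endian(self) -> Self;
}

impl ByteSwap for u32 {
    // to_be()/to_le() byte-swap only when the target's native endianness
    // differs from the requested one, just like the hton*/ntoh* functions.
    fn to_big_endian(self) -> u32 { self.to_be() }
    fn to_little_endian(self) -> u32 { self.to_le() }
}

fn main() {
    // On a little-endian target this prints 44332211.
    println!("{:08x}", 0x11223344u32.to_big_endian());
}
```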
The deque is split at the marker lo, or logical index 0. Move the
shortest part (split by lo) when growing. This way add_front is just as
fast as add_back, on average.
The previous implementation of reverse iterators used modulus (%) of
negative indices, which did work but was fragile due to dependency on
the divisor.
The deque is determined by the vec self.elts (and its length), self.nelts,
and self.lo; self.hi is calculated from these.
self.hi is just the raw index of element number `self.nelts`.
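A minimal sketch of that representation (illustrative only, in modern syntax, not the actual extra deque source):

```rust
// Sketch only: the buffer, nelts, and lo fully determine the deque;
// hi is derived rather than stored.
struct Deque<T> {
    nelts: usize,         // number of elements currently stored
    lo: usize,            // raw index of logical element 0
    elts: Vec<Option<T>>, // ring buffer
}

impl<T> Deque<T> {
    // Raw slot of logical index i, wrapping around the buffer.
    fn raw_index(&self, i: usize) -> usize {
        (self.lo + i) % self.elts.len()
    }

    // What the old `hi` field held: the raw index of element number nelts.
    fn hi(&self) -> usize {
        self.raw_index(self.nelts)
    }
}
```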
Fix some issues with the deque being very slow: keep the same vec around
instead of constructing a new one, and move as few elements as possible,
so that self.lo is not moved after grow.
[o o o o o|o o o]
   hi ...^ ^... lo

grows to

[. . . . .|o o o o o o o o|. . .]
           ^.. lo         ^.. hi
If the deque is append-only, it will result in moving no elements on
grow. If the deque is prepend-only, all will be moved each time.
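A hedged sketch of the growth step shown in the diagram (illustrative only, not the actual patch): double the buffer and move just the wrapped part that sits before lo, so lo itself never moves:

```rust
// Sketch only. Before the call, all old_len slots are occupied and the
// elements wrap: raw slots lo..old_len hold the front of the deque and
// raw slots 0..lo hold the wrapped tail.
fn grow<T>(elts: &mut Vec<Option<T>>, lo: usize) {
    let old_len = elts.len();
    // Double the buffer with empty slots.
    elts.resize_with(old_len * 2, || None);
    // Move only the wrapped tail (raw slots 0..lo) into the new space so
    // the elements become contiguous starting at `lo`. An append-only
    // deque has lo == 0 and moves nothing.
    for i in 0..lo {
        elts.swap(i, old_len + i);
    }
}
```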
The bench tests added show big improvements:
Timed using `rust build -O --test extra.rs && ./extra --bench deque`
Old version:
test deque::tests::bench_add_back ... bench: 4976 ns/iter (+/- 9)
test deque::tests::bench_add_front ... bench: 4108 ns/iter (+/- 18)
test deque::tests::bench_grow ... bench: 416964 ns/iter (+/- 4197)
test deque::tests::bench_new ... bench: 408 ns/iter (+/- 12)
With this commit:
test deque::tests::bench_add_back ... bench: 12 ns/iter (+/- 0)
test deque::tests::bench_add_front ... bench: 16 ns/iter (+/- 0)
test deque::tests::bench_grow ... bench: 1515 ns/iter (+/- 30)
test deque::tests::bench_new ... bench: 419 ns/iter (+/- 3)
Add the x64 Windows target to platform.mk and reorder some headers to fix the build on mingw64. There are still some issues with building LLVM, but this gets us one step closer.
The Base64 package previously had extremely basic functionality. It only
supported the standard encoding character set, didn't support line breaks
and always padded output. This commit makes it significantly more
powerful.
The FromBase64 impl now supports all of the standard variants of Base64.
It ignores newlines, interprets '-' and '_' as well as '+' and '/', and
doesn't require padding. It isn't incredibly pedantic and will
successfully parse strings that are not strictly valid, but I don't
think the extra complexity required to make it accept _only_ valid
strings is worth it.
The ToBase64 trait has been modified such that to_base64 now takes a
base64::Config struct which contains the output format configuration.
This currently includes the selection of character set (standard or
url safe), whether or not to pad and an optional line break width. The
package comes with three static Config structs for the RFC 4648
standard, RFC 4648 url safe and RFC 2045 MIME formats.
The other option for configuring ToBase64 output would be to have one
method that takes the configuration flags, plus other traits with default
impls for the common cases, but I think that's a little messier.
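For illustration, a hedged sketch of the shape of that API (names and fields here are illustrative and may not match the module exactly; modern types shown):

```rust
// Sketch only; the real module's Config may differ in detail.
enum CharacterSet {
    Standard, // '+' and '/'
    UrlSafe,  // '-' and '_'
}

struct Config {
    char_set: CharacterSet,
    pad: bool,                  // append '=' padding?
    line_length: Option<usize>, // break lines at this width, if any
}

// One of the ready-made configurations mentioned above (RFC 4648 standard).
const STANDARD: Config = Config {
    char_set: CharacterSet::Standard,
    pad: true,
    line_length: None,
};

trait ToBase64 {
    // e.g. data.to_base64(STANDARD)
    fn to_base64(&self, config: Config) -> String;
}
```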
FromBase64 still kills the task if you pass it invalid input, which isn't
particularly appropriate for a function into which you'll be passing
unvalidated input. Would it be worth changing its signature to return a
Result?
There's now an enum to pick the character set instead of a url_safe
bool.
from_base64 now returns a Result<~[u8], ~str> and returns an Err instead
of killing the task when it is called on invalid input.
Fixed documentation examples.
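A small usage sketch of the new error handling (modern types shown; the original signature used `~[u8]` and `~str`):

```rust
// Sketch only: from_base64-style decoding now yields a Result instead of
// failing the task on bad input.
fn report(decoded: Result<Vec<u8>, String>) {
    match decoded {
        Ok(bytes) => println!("decoded {} bytes", bytes.len()),
        Err(msg) => println!("invalid base64 input: {}", msg),
    }
}
```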