This is mostly for consistency, as you can now compare raw pointers in
constant expressions or without the standard library.
It also reduces the number of `ptrtoint` instructions in the IR, which makes
it easier to track down the culprits of what is usually an anti-pattern.
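As a minimal illustration of the behavior in question (nothing here is specific to this change; it just shows raw pointers being compared directly with the built-in operators):

```rust
fn main() {
    let x = 10i32;
    let y = 20i32;
    let p: *const i32 = &x;
    let q: *const i32 = &y;
    let r: *const i32 = &x;
    // Raw pointers compare by address with the ordinary operators; no
    // library helper (and no pointer-to-integer round trip) is needed.
    assert!(p == r);
    assert!(p != q);
}
```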
The purpose of this macro is to further reduce the number of allocations which
occur when dealing with formatting strings. This macro will perform all of the
static analysis necessary to validate that a format string is safe, and then it
will wrap up the "format string" into an opaque struct which can then be passed
around.
Two safe functions are added (write/format) which take this opaque argument
structure, unwrap it, and then call the unsafe version of write/format (in an
unsafe block). Other than these two functions, it is not intended for anyone to
ever look inside this opaque struct.
The macro looks a bit odd, but mostly because of rvalue lifetimes this is the
only way I know of for it to be safe.
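As a rough sketch of the intended flow (using today's `std::fmt` names as stand-ins for the safe `write` entry point described above; the original signatures differed), the opaque value produced by the macro is handed straight to a writer without building an intermediate string:

```rust
use std::fmt;

// The opaque value produced by `format_args!` is passed to a safe `write`
// entry point, which drives the formatting without allocating a string.
fn log_to(out: &mut dyn fmt::Write, value: u32) -> fmt::Result {
    fmt::write(out, format_args!("value = {}", value))
}

fn main() {
    let mut buf = String::new();
    log_to(&mut buf, 42).unwrap();
    assert_eq!(buf, "value = 42");
}
```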
Example use-cases of this are:
* third-party libraries can use the default formatting syntax without any
forced allocations
* the fail!() macro can avoid allocating the format string
* the logging macros can avoid allocating any strings
I plan on transitioning the standard logging/failing macros to use these soon. This is currently blocked on inner statics being usable in cross-crate situations (I'm still tracking down bugs there), but it will hopefully be coming soon!
Additionally, I'd rather settle on a name now than later, so if anyone has a better suggestion than `format_args`, I'm not attached to the name at all :)
The default buffer size is the same as the one in Java's BufferedWriter.
We may want BufferedWriter to have a Drop impl that flushes, but that
isn't possible right now due to #4252/#4430. This would be a bit
awkward due to the possibility of the inner flush failing. For what it's
worth, Java's BufferedWriter doesn't have a flushing finalizer, but that
may just be because Java's finalizer support is awful.
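A small sketch of the resulting usage pattern (written against today's `std::io` names; the 2013 module path differed): since a destructor can't report a failed flush, callers flush explicitly so the error is visible.

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    let file = File::create("out.txt")?;
    let mut w = BufWriter::new(file);
    w.write_all(b"buffered output\n")?;
    // Flush explicitly so a failed flush surfaces as an error here instead
    // of being swallowed by a destructor.
    w.flush()?;
    Ok(())
}
```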
The current implementation of BufferedStream is weird in my opinion, but
it's what the discussion in #8953 settled on.
I wrote a custom copy function since vec::copy_from doesn't optimize as
well as I would like.
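For illustration, a hand-rolled bounded copy in the spirit of what's described (this is not the helper from the patch; the name and shape are assumed):

```rust
// Copy as many bytes as fit from `src` into `dst`, returning the count.
fn copy_into(dst: &mut [u8], src: &[u8]) -> usize {
    let n = dst.len().min(src.len());
    dst[..n].copy_from_slice(&src[..n]);
    n
}

fn main() {
    let mut buf = [0u8; 4];
    let n = copy_into(&mut buf, b"hello");
    assert_eq!(n, 4);
    assert_eq!(&buf, b"hell");
}
```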
Closes #8953
Visit the free functions of `std::vec` and reimplement or remove some. Most prominently, remove `each_permutation` and replace it with two iterators, `ElementSwaps` and `Permutations`.
Replace `unzip` and `unzip_slice` with an updated `unzip` that works with an iterator argument.
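A quick sketch of iterator-based unzipping in use (shown via today's `Iterator::unzip` adapter; the original was a free function in `std::vec`):

```rust
fn main() {
    let pairs = vec![(1, 'a'), (2, 'b'), (3, 'c')];
    // One pass over the iterator of pairs produces the two output vectors.
    let (nums, letters): (Vec<i32>, Vec<char>) = pairs.into_iter().unzip();
    assert_eq!(nums, vec![1, 2, 3]);
    assert_eq!(letters, vec!['a', 'b', 'c']);
}
```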
Replace each_permutation with a Permutation iterator. The new permutation iterator is more efficient since it uses an algorithm that produces permutations in an order where each is only one element swap apart, including swapping back to the original state with one swap at the end.
Unify the seldom-used functions `build`, `build_sized`, `build_sized_opt` into just one function, `build`.
Remove `equal_sizes`
These functions have very few users since they are mostly replaced by
iterator-based constructions.
Convert a few remaining users in-tree, and reduce the number of
functions by basically renaming build_sized_opt to build, and removing
the other two. This applies to both the vec and the at_vec versions.
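A hedged sketch of the surviving `build` shape (the signature here is assumed for illustration, not the 2013 original): the caller is handed a `push` callback, plus an optional size hint for the initial allocation.

```rust
// Construct a vector by letting `builder` push elements through a callback.
fn build<A>(size_hint: Option<usize>, builder: impl FnOnce(&mut dyn FnMut(A))) -> Vec<A> {
    let mut v = Vec::with_capacity(size_hint.unwrap_or(0));
    builder(&mut |x| v.push(x));
    v
}

fn main() {
    let squares: Vec<u32> = build(Some(5), |push| {
        for i in 0..5 {
            push(i * i);
        }
    });
    assert_eq!(squares, vec![0, 1, 4, 9, 16]);
}
```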
The basic construct `x.len() == y.len()` is just as simple.
This function used to be a precondition (not sure about the terminology),
so it had to be a function; that is no longer relevant.
Update for a lot of changes (not many free functions left), add examples
of the important methods `slice` and `push`, and write a short bit about
iteration.
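For example, the two highlighted methods in use (shown with today's indexing syntax in place of the `slice` method):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    v.push(4);             // append one element
    let middle = &v[1..3]; // borrow a sub-slice
    assert_eq!(middle, &[2, 3]);

    // Iteration borrows each element in turn.
    let sum: i32 = v.iter().sum();
    assert_eq!(sum, 10);
}
```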
Introduce ElementSwaps and Permutations. ElementSwaps is an iterator
that for a given sequence length yields the element swaps needed
to visit each possible permutation of the sequence in turn.
We use an algorithm that generates a sequence such that each permutation
is only one swap apart.
```rust
let mut v = [1, 2, 3];
for perm in v.permutations_iter() {
    // yields 1 2 3 | 1 3 2 | 3 1 2 | 3 2 1 | 2 3 1 | 2 1 3
}
```
The `.permutations_iter()` yields clones of the input vector for each
permutation.
If a copyless traversal is needed, it can be constructed with
`ElementSwaps`:
```rust
for (a, b) in ElementSwaps::new(3) {
    // yields (2, 1), (1, 0), (2, 1) ...
    v.swap(a, b);
    // ..
}
```
This is a patch to fix #6031. I didn't see any tests for the C++ library code, so I didn't write a test for my changes. Did I miss something, or are there really no tests?
This allows cross-crate inlining which is *very* good because this is called a
lot throughout libstd (even when libstd is inlined across crates).
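For context, a tiny illustration of the kind of change involved (the items below are made up; the real change was to `Repr::repr` in libstd): marking a small, hot method `#[inline]` makes its body available to downstream crates.

```rust
pub trait Describe {
    fn describe(&self) -> &'static str;
}

impl Describe for u32 {
    // #[inline] emits the function body into crate metadata so that other
    // crates can inline the call instead of paying for a cross-crate call.
    #[inline]
    fn describe(&self) -> &'static str {
        "u32"
    }
}

fn main() {
    println!("{}", 7u32.describe());
}
```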
In one of my projects, I have a test case with the following performance characteristics:

| commit | optimization level | runtime (seconds) |
|--------|--------------------|-------------------|
| before | O2                 | 22s               |
| before | O3                 | 107s              |
| after  | O2                 | 13s               |
| after  | O3                 | 12s               |
I'm a bit disturbed by the 107s runtime from O3 before this commit. The performance characteristics of this test involve doing an absurd number of small operations. A huge portion of this is creating hashmaps, which involves allocating vectors.
The worst portions of the profile are:
![screen shot 2013-09-06 at 10 32 15 pm](https://f.cloud.github.com/assets/64996/1100723/e5e8744c-177e-11e3-83fc-ddc5f18c60f9.png)
As you can see, that looks like some *serious* problems with inlining. I would expect the hash map methods to be high up in the profile, but the top 9 callers of `cast::transmute_copy` were the various monomorphized instances of `Repr::repr`.
I wish there were a better way to detect things like this in the future, and it's unfortunate that this is required for performance in the first place. I suppose I'm not entirely sure why this is needed, because all of the methods should have been generated in-crate (monomorphized versions of library functions), so they should have gotten inlined? It could also just be that by modifying LLVM's idea of the inline cost of this function, it was able to inline it in many more locations.
Here's a fix for issue #7588, "Overflow handling of from_str methods is broken".
The integer overflow issues are taken care of by checking to see if the multiply-by-radix-and-add-next-digit process is reversible. If it overflowed, then some information is lost and the process is irreversible, in which case, None is returned.
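A hedged sketch of that reversibility check (illustrative only, not the actual libstd code): after each multiply-and-add step, undo the step; if the inputs can't be recovered, the accumulator wrapped and None is returned.

```rust
fn parse_u32(s: &str, radix: u32) -> Option<u32> {
    let mut acc: u32 = 0;
    for c in s.chars() {
        let digit = c.to_digit(radix)?;
        let prod = acc.wrapping_mul(radix);
        let next = prod.wrapping_add(digit);
        // Reverse the step: dividing the product by the radix must give back
        // the old accumulator, and the add must not wrap past the product.
        if prod / radix != acc || next < prod {
            return None; // information was lost, i.e. the value overflowed
        }
        acc = next;
    }
    Some(acc)
}

fn main() {
    assert_eq!(parse_u32("4294967295", 10), Some(u32::MAX));
    assert_eq!(parse_u32("4294967296", 10), None); // overflow detected
}
```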
Floats now consistently return Some(Inf) or Some(-Inf) on overflow, thanks to calls to NumStrConv::inf() and NumStrConv::neg_inf() respectively when overflow is detected (the same detection yields None in the case of ints and uints).
This is my first contribution to Rust, and my first time using the language in general, so any and all feedback is appreciated.
This is a reopening of the libuv-upgrade part of #8645. Hopefully this won't
cause random segfaults all over the place. The Windows regression in testing
should also be fixed (it shouldn't build the whole compiler twice).
A notable difference from before is that gyp is now a git submodule instead of
always git-cloned at make time. This makes bundling for releases easier.
Closes #8850
Also redefine all of the standard logging macros to use more Rust code instead
of custom LLVM translation code. This makes them a bit easier to understand, but
also more flexible for future types of logging.
Additionally, this commit removes the LogType language item in preparation for
changing how logging is performed.
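For flavor, a minimal sketch of what an all-Rust logging macro can look like (the macro name and the level check below are purely illustrative, not the standard macros):

```rust
macro_rules! log_msg {
    ($lvl:expr, $($arg:tt)*) => {
        // Stand-in for a real runtime log-level check.
        if $lvl <= 3 {
            println!("[level {}] {}", $lvl, format_args!($($arg)*));
        }
    };
}

fn main() {
    log_msg!(2, "x = {}", 42);
}
```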
The trait will keep the `Iterator` naming, but a more concise module
name makes using the free functions less verbose. The module will define
iterables in addition to iterators, as it deals with iteration in
general.