Renamed the invert() function in iter.rs to flip().
Also renamed the Invert<T> type to Flip<T>.
Some related code comments have also been changed. All the documentation I
could find has been updated, along with every call site of the
function/type that I could locate.
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except in
a `let` initializer, where we default to the innermost block (see the
sketch after this list). Rules are documented in the code, but not in
the manual (yet).
See the new test run-pass/cleanup-value-scopes.rs for examples.
- Refactor Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
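The two scope rules can be seen in a small present-day sketch (illustrative only; `Noisy` is a made-up type, and the test named above remains the authoritative set of examples):

```rust
// Minimal sketch of the temporary-scope rules described above.
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    // Temporary inside an ordinary statement: cleaned up at the end of
    // that statement.
    let _len = Noisy("statement temporary").0.len();
    println!("after the statement");

    // Temporary in a `let` initializer (borrowed directly): its scope is
    // the innermost enclosing block, so it outlives the statement.
    let kept = &Noisy("let-initializer temporary");
    println!("still borrowed here: {}", kept.0);
} // the `let`-initializer temporary is dropped here, at the end of the block
```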
r? @pcwalton
Unique pointers and vectors currently contain a reference counting
header when containing a managed pointer.
This `{ ref_count, type_desc, prev, next }` header is not necessary and
not a sensible foundation for tracing. It adds needless complexity to
library code and is responsible for breakage in places where the branch
has been left out.
The `borrow_offset` field can now be removed from `TyDesc` along with
the associated handling in the compiler.
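For illustration only, the header being removed has roughly this shape; the field types below are placeholders, not the compiler's actual definitions:

```rust
// Illustrative sketch of the `{ ref_count, type_desc, prev, next }` header;
// the field types are stand-ins, not the real runtime types.
#[allow(dead_code)]
struct BoxHeader {
    ref_count: usize,
    type_desc: *const (), // stand-in for the type descriptor pointer
    prev: *mut BoxHeader,
    next: *mut BoxHeader,
}

fn main() {
    // Unique pointers/vectors containing a managed pointer paid for this header.
    println!("header size on this target: {} bytes",
             std::mem::size_of::<BoxHeader>());
}
```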
Closes #9510. Closes #11533.
The `print!` and `println!` macros are now the preferred method of printing, and so there is no reason to export the `stdio` functions in the prelude. The functions have also been replaced by their macro counterparts in the tutorial and other documentation so that newcomers don't get confused about what they should be using.
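A minimal sketch of the preferred style (present-day syntax):

```rust
fn main() {
    // Preferred: the formatting macros, rather than the bare `print`/
    // `println` functions that used to be exported from the prelude.
    print!("2 + 2 = ");
    println!("{}", 2 + 2);
}
```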
This commit makes the short titles of the modules provided by libstd
uniform, in order to make their roles more explicit when glancing at the index.
Signed-off-by: Luca Bruno <lucab@debian.org>
This uses quite a bit of unsafe code for speed and failure safety, and allocates `2*n` elements of temporary storage.
[Performance](https://gist.github.com/huonw/5547f2478380288a28c2):
| n | new | priority_queue | quick3 |
|-------:|---------:|---------------:|---------:|
| 5 | 200 | 155 | 106 |
| 100 | 6490 | 8750 | 5810 |
| 10000 | 1300000 | 1790000 | 1060000 |
| 100000 | 16700000 | 23600000 | 12700000 |
| sorted | 520000 | 1380000 | 53900000 |
| trend | 1310000 | 1690000 | 1100000 |
(The times are in nanoseconds, having subtracted the set-up time (i.e. the `just_generate` bench target).)
I imagine that there is still significant room for improvement, particularly because both priority_queue and quick3 are doing a static call via `Ord` or `TotalOrd` for the comparisons, while this is using a (boxed) closure.
Also, this code does not `clone`, unlike `quick_sort3`; and is stable, unlike both of the others.
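As a rough usage sketch (using the present-day `sort_by` name; the method added here took a comparison closure in a similar way, though the exact signature differed):

```rust
fn main() {
    let mut v = vec![5, 3, 9, 3, 1];
    // Stable sort driven by a comparison closure.
    v.sort_by(|a, b| a.cmp(b));
    assert_eq!(v, vec![1, 3, 3, 5, 9]);
}
```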
Update the next() method to just return self.v in the case that we've reached
the last element that the iterator will yield. This produces the same
behavior as before, but without the cost of updating the field.
Update the size_hint() method to return a better hint now that #9629 is fixed.
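A simplified sketch of the pattern (not the std iterator; `Pieces` is a made-up splitter over zero bytes): once the iterator knows it is yielding its final piece, it can hand back the remaining slice directly instead of also shrinking the field first.

```rust
struct Pieces<'a> {
    v: &'a [u8],
    finished: bool,
}

impl<'a> Iterator for Pieces<'a> {
    type Item = &'a [u8];

    fn next(&mut self) -> Option<&'a [u8]> {
        if self.finished {
            return None;
        }
        match self.v.iter().position(|&b| b == 0) {
            Some(i) => {
                let piece = &self.v[..i];
                self.v = &self.v[i + 1..];
                Some(piece)
            }
            None => {
                // Final piece: return self.v as-is; no need to also update
                // the slice field, which is the optimization described above.
                self.finished = true;
                Some(self.v)
            }
        }
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        if self.finished {
            (0, Some(0))
        } else {
            // At least one more piece, at most len + 1 pieces in total.
            (1, Some(self.v.len() + 1))
        }
    }
}

fn main() {
    let data = [1u8, 2, 0, 3, 4];
    let pieces: Vec<&[u8]> = Pieces { v: &data, finished: false }.collect();
    assert_eq!(pieces, vec![&[1u8, 2][..], &[3u8, 4][..]]);
}
```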
very small runs.
This uses a lot of unsafe code for speed; otherwise we would have to
sort by sorting lists of indices and then doing a pile of swaps to put
everything in the correct place (a sketch of that indirect approach follows below).
Fixes #9819.
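For contrast, a sketch of the indirect approach being avoided (illustrative only; `sort_via_indices` is a made-up helper, and it sidesteps the final swap phase by cloning into a new vector):

```rust
// Sort a vector of indices by the keys they point at, then rebuild the
// data in that order. The actual change sorts the elements in place instead.
fn sort_via_indices<T: Clone + Ord>(v: &mut Vec<T>) {
    let mut idx: Vec<usize> = (0..v.len()).collect();
    idx.sort_by(|&a, &b| v[a].cmp(&v[b]));
    let reordered: Vec<T> = idx.iter().map(|&i| v[i].clone()).collect();
    *v = reordered;
}

fn main() {
    let mut v = vec![3, 1, 2];
    sort_via_indices(&mut v);
    assert_eq!(v, vec![1, 2, 3]);
}
```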
Also, add `.remove_opt` and replace `.unshift` with `.remove(0)`. The
code size reduction seems to compensate for not having the optimised
special cases.
This makes the included benchmark more than 3 times faster. Also,
`.unshift(x)` is now faster when written as `.insert(0, x)`, which can
reuse the allocation if necessary.
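A small usage sketch with the present-day names (note that today's `remove` panics on an out-of-range index; the Option-returning `remove_opt` added here is not the modern API):

```rust
fn main() {
    let mut v = vec![2, 3];
    // Insert at the front (the `.insert(0, x)` spelling mentioned above).
    v.insert(0, 1);
    assert_eq!(v, vec![1, 2, 3]);
    // Remove from the front with `.remove(0)`.
    let first = v.remove(0);
    assert_eq!((first, v), (1, vec![2, 3]));
}
```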
Removing the aliasing `&mut []` and `&[]` from `shift_opt` also simplifies it.
The above also allows the use of `copy_nonoverlapping_memory` in `[].copy_memory` (I did an audit of each use of `.copy_memory` and `std::vec::bytes::copy_memory`, and I believe none of them are called with arguments that can ever alias). This change requires that `unsafe` code using `copy_memory` respect the aliasing rules of `&mut []`.
Update the docs for `copy_memory`:
&mut [u8] and &[u8] really shouldn't be overlapping at all (part of the
uniqueness/aliasing guarantee of &mut), so no point in encouraging it.
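As an illustration of the non-overlap requirement (a present-day analogue; `std::ptr::copy_nonoverlapping` is the modern counterpart of the `copy_nonoverlapping_memory` intrinsic mentioned above):

```rust
use std::ptr;

fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];
    // Sound only because `src` and `dst` are distinct arrays, so the two
    // regions cannot overlap; this mirrors the uniqueness guarantee that
    // `&mut [u8]` gives against any `&[u8]`.
    unsafe {
        ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), src.len());
    }
    assert_eq!(dst, src);
}
```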
This method is the mutable version of ImmutableVector::split. It is
a DoubleEndedIterator, making mut_rsplit irrelevant. The size_hint
method is not optimal because of #9629.
At the same time, clarify *split* iterator doc.
mut_chunks() returns an iterator that produces mutable slices. This is the
mutable version of the existing chunks() method on the ImmutableVector trait.
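A short usage sketch with the present-day names (`split_mut` and `chunks_mut` are today's spellings of the `mut_split`/`mut_chunks` methods described above):

```rust
fn main() {
    let mut data = [1, 0, 2, 3, 0, 4, 5];
    // Mutable split iterator: each piece can be modified in place.
    for piece in data.split_mut(|&x| x == 0) {
        piece.reverse();
    }
    assert_eq!(data, [1, 0, 3, 2, 0, 5, 4]);

    let mut buf = [1, 2, 3, 4, 5];
    // Mutable, non-overlapping chunks.
    for chunk in buf.chunks_mut(2) {
        for x in chunk.iter_mut() {
            *x *= 10;
        }
    }
    assert_eq!(buf, [10, 20, 30, 40, 50]);
}
```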
New benchmark tests in vec.rs:
`push`, `starts_with_same_vector`, `starts_with_single_element`,
`starts_with_diff_one_element_end`, `ends_with_same_vector`,
`ends_with_single_element`, `ends_with_diff_one_element_beginning` and
`contains_last_element`
This adds `get_opt` to `std::vec`, which looks up an item by index and returns an `Option`. If the given index is out of range, `None` will be returned, otherwise a `Some`-wrapped item will be returned.
Example use case:
```rust
use std::os;
fn say_hello(name: &str) {
    println(fmt!("Hello, %s", name));
}

fn main() {
    // Try to get the first cmd line arg, but default to "World"
    let args = os::args();
    let default = ~"World";
    say_hello(*args.get_opt(1).unwrap_or(&default));
}
```
If there's an existing way of implementing this pattern that's cleaner, I'll happily close this. I'm also open to naming suggestions (`index_opt`?)
std::vec: Sane implementations for connect_vec and concat_vec
Avoid unnecessary copying of subvectors, and calculate the needed space
beforehand. These implementations are simple but better than the
previous ones.
Also only implement it once, for all `Vector<T>` using:
impl<'self, T: Clone, V: Vector<T>> VectorVector<T> for &'self [V]
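A minimal sketch of the approach (not the std code; `concat_vec`/`connect_vec` here are free functions written in present-day Rust): compute the needed length up front, reserve once, then copy each subvector without intermediate allocations.

```rust
fn concat_vec<T: Clone>(vs: &[Vec<T>]) -> Vec<T> {
    // Calculate the needed space beforehand, then fill it in one pass.
    let total: usize = vs.iter().map(|v| v.len()).sum();
    let mut out = Vec::with_capacity(total);
    for v in vs {
        out.extend_from_slice(v);
    }
    out
}

fn connect_vec<T: Clone>(vs: &[Vec<T>], sep: &T) -> Vec<T> {
    // Same idea, plus one separator between each pair of subvectors.
    let total: usize = vs.iter().map(|v| v.len()).sum::<usize>()
        + vs.len().saturating_sub(1);
    let mut out = Vec::with_capacity(total);
    for (i, v) in vs.iter().enumerate() {
        if i > 0 {
            out.push(sep.clone());
        }
        out.extend_from_slice(v);
    }
    out
}

fn main() {
    assert_eq!(concat_vec(&[vec![1, 2], vec![3]]), vec![1, 2, 3]);
    assert_eq!(connect_vec(&[vec![1, 2], vec![3]], &0), vec![1, 2, 0, 3]);
}
```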
Closes #9581.