This adds the following methods to ints and uints:
- div
- modulo
- div_mod
- quot_rem
- gcd
- lcm
- divisible_by
- is_even
- is_odd
I have not implemented Natural for BigInt and BigUInt because they're a little over my head.
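For illustration, usage looks roughly like this (a sketch only; the exact receiver and argument forms here are assumptions, not necessarily the final signatures):

```rust
fn main() {
    let a = 12;
    let b = 8;
    assert!(a.gcd(&b) == 4);
    assert!(a.lcm(&b) == 24);

    // div/modulo are intended to round toward negative infinity,
    // unlike the truncating / and % operators.
    let n = -8;
    assert!(n.div(&3) == -3);
    assert!(n.modulo(&3) == 1);

    assert!(a.is_even());
    assert!(n.divisible_by(&4));
}
```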
This also reverts some changes to TLS that were leaking memory.
Conflicts:
src/libcore/rt/uv/net.rs
src/libcore/task/local_data_priv.rs
src/libcore/unstable/lang.rs
A task without an unwinder will abort the process on failure.
I'm using this in the runtime tests to guarantee that a call to
`assert!` actually triggers some kind of failure (an abort)
instead of silently doing nothing. This is essentially in lieu
of a working linked failure implementation.
This fixes #5976.
It also removes `os::waitpid` in favour of (the existing) `run::waitpid`. I included this change because I figured it is kind of related.
r?
- The return value meant different things on different
platforms (on windows, it was the exit code, on unix it was
the status information returned from waitpid).
- It was undocumented.
- There also exists run::waitpid, which does much the same
thing but has a more consistent return value and also some
documentation.
Closes #3083.
This takes a similar approach to #5797: a set of used mutable definitions is kept on the `tcx`. Everything is warned about by default, and analyses must explicitly add mutable definitions to this set to suppress the warning.
Most of this was pretty straightforward, although I ran into one caveat while implementing it. Apparently when the old modes are used (or maybe `legacy_modes`, I'm not sure), different code paths are taken that issue spurious warnings. I'm not really sure how modes even worked, so I had a lot of trouble tracking this down. Because they're a legacy thing, I figured I'd just de-mode the compiler so the warnings wouldn't be a problem anymore (at least for the compiler itself).
Other than that, the entire compiler compiles without warnings of unused mutable variables. To prevent bad warnings, #5965 should be landed (which in turn is waiting on #5963) before landing this. I figured I'd stick it out for review anyway though.
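For reference, the kind of code the warning is aimed at looks like this (an illustrative example, not taken from the compiler):

```rust
fn main() {
    // `x` is declared mutable but never mutated, so it now triggers
    // the unused-mutable-variable warning.
    let mut x = 5;
    let y = x + 1;
    assert!(y == 6);

    // `z` is actually mutated, so no warning is issued for it.
    let mut z = 0;
    z += 1;
    assert!(z == 1);
}
```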
```rust
use core::cell;

fn main() {
    let x = cell::Cell(Some(~"foo"));
    // `y` borrows the string stored inside the cell's `value` field...
    let y = x.value.get_ref().get_ref();
    // ...but `with_mut_ref` can still overwrite the contents, freeing
    // the string out from under `y`...
    do x.with_mut_ref |z| { *z = None; }
    // ...so this reads freed memory.
    println(*y) // boom!
}
```
Things like the GC heap and unwinding are desirable everywhere the language
might be used, not just in tasks. All Rust code should have access to
LocalServices.
This renaming, proposed in the [Numeric Bikeshed](https://github.com/mozilla/rust/wiki/Bikeshed-Numeric-Traits#rename-modulo-into-rem-or-remainder-in-traits-and-docs), will allow us to implement div and modulo methods that follow the conventional mathematical definitions for negative numbers without altering the definitions of the operators (and confusing systems programmers). Here is a useful answer on StackOverflow that explains the difference between `div`/`mod` and `quot`/`rem` in Haskell: [When is the difference between quotRem and divMod useful?](http://stackoverflow.com/a/339823/679485).
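For example, the two conventions only differ when the operands have opposite signs; the operators keep their current truncating behaviour (a quick illustration):

```rust
fn main() {
    // Truncating division ("quot"/"rem"): what / and % do today.
    assert!(-7 / 2 == -3);
    assert!(-7 % 2 == -1);

    // Floored division ("div"/"mod") rounds toward negative infinity:
    // -7 div 2 == -4 and -7 mod 2 == 1, so the remainder takes the
    // sign of the divisor rather than the dividend.
}
```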
This is part of the numeric trait reforms tracked in issue #4819.
As the name suggests, this replaces many instances of cast::reinterpret_cast with cast::transmute. It's essentially the boring part of fixing #5163; the remaining reinterpret_casts should be trickier to remove (unless I missed a boring case).
r? @catamorphism
This implements the fixed_stack_segment for items with the rust-intrinsic abi, and then uses it to make f32 and f64 use intrinsics where appropriate, but without overflowing stacks and killing canaries (cf. #5686 and #5697). Hopefully.
@pcwalton, the fixed_stack_segment implementation involved mirroring its implementation in `base.rs` in `trans_closure`, but without adding the `set_no_inline` (reasoning: that would defeat the purpose of intrinsics), which is possibly incorrect.
I'm a little hazy about how the underlying structure works, so I've annotated the 4 that have caused problems so far, but there's no guarantee that the other intrinsics are entirely well-behaved.
Anyway, it has good results (the following are just summing the result of each function for 1 up to 100 million):
```
$ ./intrinsics-perf.sh f32
func    new   old   speedup
sin     0.80  2.75  3.44
cos     0.80  2.76  3.45
sqrt    0.56  2.73  4.88
ln      1.01  2.94  2.91
log10   0.97  2.90  2.99
log2    1.01  2.95  2.92
exp     0.90  2.85  3.17
exp2    0.92  2.87  3.12
pow     6.95  8.57  1.23

geometric mean: 2.97

$ ./intrinsics-perf.sh f64
func    new    old    speedup
sin     12.08  14.06  1.16
cos     12.04  13.67  1.14
sqrt    0.49   2.73   5.57
ln      4.11   5.59   1.36
log10   5.09   6.54   1.28
log2    2.78   5.10   1.83
exp     2.00   3.97   1.99
exp2    1.71   3.71   2.17
pow     5.90   7.51   1.27

geometric mean: 1.72
```
So about 3x faster on average for f32, and 1.7x for f64. This isn't exactly apples to apples though, since this patch also adds #[inline(always)] to all the function definitions, which possibly gives a speedup on its own.
(fwiw, GitHub is showing 93c0888 after d9c54f8 (since I cherry-picked the latter from #5697), but git's order is the other way.)
Achieves at least 5x speed up for some functions!
Also, reorganise the delegation code so that the delegated function wrappers
have the #[inline(always)] annotation, and reduce the repetition of
delegate!(..).
@thestinger r?
~~The 2 `_unlimited` functions are marked `unsafe` since they may not terminate.~~
The `state` fields of the `Unfoldr` and `Scan` iterators are public, since being able to access the final state after the iteration has finished seems reasonable/possibly useful.
~~Lastly, I converted the tests to use `.to_vec`, which halves the amount of code for them, but it means that a `.transform(|x| *x)` call is required on each iterator.~~
(removed the 2 commits with `to_vec` and `foldl`.)
This allows one to write
```rust
let x = function_with_complicated_return_type();
let size = size_of_val(&x);
```
instead of
```rust
let x = function_with_complicated_return_type();
let size = size_of::<ComplicatedReturnType<Foo, Bar>>();
```
This closes #4364. I came into rust after modes had begun to be phased out, so I'm not exactly sure what they all did. My strategy was basically to turn on the compilation warnings and assume that once everything compiled and passed all the tests, it was all good.
In most cases, I just dropped the mode, but in others I converted things to use `&` pointers when otherwise a move would happen.
This depends on #5963. When running the tests, everything passed except for a few compile-fail tests. These tests leaked memory, causing the task to abort differently. By suppressing the ICE from #5963, no leaks happen and the tests all pass. I would have looked into where the leaks were coming from, but I wasn't sure where or how to debug them (I found `RUSTRT_TRACK_ALLOCATIONS`, but it wasn't all that useful).
This switches the unicode functions in core to use static character-range tables and a binary search helper rather than open-coded switch statements. It adds about 50k of read only data to the libcore binary but cuts out a similar amount of compiled IR. Would have done it this way in the first place but we didn't have structured statics for a long time.
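The lookup helper is just a binary search over sorted, non-overlapping inclusive ranges, along these lines (an illustrative sketch rather than the exact libcore code):

```rust
// Returns true if `c` falls inside one of the sorted, non-overlapping
// inclusive ranges in `table`.
fn bsearch_range_table(c: char, table: &[(char, char)]) -> bool {
    let mut lo = 0u;
    let mut hi = table.len();
    while lo < hi {
        let mid = (lo + hi) / 2;
        let (lo_c, hi_c) = table[mid];
        if c < lo_c {
            hi = mid;
        } else if c > hi_c {
            lo = mid + 1;
        } else {
            return true;
        }
    }
    false
}
```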
vec::windowed fails if the given window size is greater than the vector length + 1.
```rust
for vec::windowed(7, &[1,2,3,4,5,6]) |vs| { fail!(); } // => do nothing
for vec::windowed(8, &[1,2,3,4,5,6]) |vs| { fail!(); } // => assertion failure in vec::slice
```
It will check which scheduler it is running under and create the
correct type of task as appropriate. Most options aren't supported
but basic spawning works.
`read_until` is just doing a bytewise comparison. This means the following program prints `xyå12`, not `xy`, which it would if it were actually checking chars.
```rust
fn main() {
    do io::with_str_reader("xyå12") |r| {
        io::println(r.read_until('å', false));
    }
}
```
This patch makes the type of read_until match what it is actually doing.
This is just a bunch of minor changes and simplifications to the structure of core::rt. It makes ownership of the ~Scheduler more strict (though it is still mutably aliased sometimes), turns the scheduler cleanup_jobs vector into just a single job, and shunts the thread-local scheduler code off to its own file.
r? @nikomatsakis
This doesn't completely fix the x86 ABI for structs, but it does fix some cases. On linux, structs appear to be returned correctly now. On windows, structs are only returned by pointer when they are greater than 8 bytes. That scenario works now.
In the case where the struct is less than 8 bytes our generated code looks peculiar. When returning a pair of u16, C packs both variables into %eax to return them. Our generated code though expects to find one of the pair in %ax and the other in %dx. Similar for u8. I haven't looked into it yet.
There appears to also be struct passing problems on linux, where my `extern-pass-TwoU8s` and `extern-pass-TwoU16s` tests are failing.
This adds a bunch of tests for passing and returning structs
of various sizes to C. It fixes the struct return rules on unix,
and on windows for structs of size > 8 bytes. Struct passing
on unix for structs under a certain size appears to still be broken.
In order to do a context switch you have to give up ownership of the scheduler,
effectively passing it to the next execution context. This could help avoid
some situations where tasks retain unsafe pointers to schedulers between context
switches, across which they may have changed threads.
There are still a number of uses of unsafe scheduler pointers.
Closes #5487, #1913, and #4568
I tracked this by adding all used unsafe blocks/functions to a set on the `tcx` passed around, and then when the lint pass comes around if an unsafe block/function isn't listed in that set, it's unused.
I also removed everything from the compiler that was unused, and up to stage2 is now compiling without any known unused unsafe blocks.
I chose `unused_unsafe` as the name of the lint attribute, but there may be a better name...
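For example, a block like the following is now warned about (an illustrative case, not one taken from the compiler):

```rust
fn main() {
    // Nothing in this block actually needs `unsafe`, so the new
    // `unused_unsafe` lint flags the block as unnecessary.
    unsafe {
        let x = 1 + 1;
        assert!(x == 2);
    }
}
```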
This takes care of one of the last remnants of assumptions about enum layout. A type visitor is now passed a function to read a value's discriminant, then accesses fields by being passed a byte offset for each one. The latter may not be fully general, despite the constraints imposed on representations by borrowed pointers, but works for any representations currently planned and is relatively simple.
Closes #5652.
This is a follow-up to #5761. Its purpose is to make core::libc more consistent - it currently only defines SIGKILL and SIGTERM, because they are the only values that happen to be needed by libcore.
This adds all the posix signal value constants, except for those that have different values on different architectures.
The output of the command `man 7 signal` was used to compile these signal values.
The foldl based implementation allocates lots of unneeded vectors.
iter::map_to_vec is already optimized to avoid these.
One place that benefits quite a lot from this is the metadata decoder, helping with compile times for tiny programs.
The current protocol is very comparable to Python, where `.__iter__()` returns an iterator object which implements `.__next__()` and throws `StopIteration` on completion. `Option` is much cleaner than using exceptions as a flow control hack, though. It requires that the container is frozen, so there's no worry about invalidating iterators.
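In Rust terms, the protocol boils down to a single trait along these lines (a sketch of the idea; the trait as committed may differ in details):

```rust
pub trait Iterator<A> {
    /// Advance the iterator, returning `None` once it is exhausted.
    fn next(&mut self) -> Option<A>;
}
```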
Advantages over internal iterators, which are functions that are passed closures and directly implement the iteration protocol:
* Iteration is stateful, so you can interleave iteration over arbitrary containers. That's needed to implement algorithms like zip, merge, set union, set intersection, set difference and symmetric difference. I already used this internally in the `TreeMap` and `TreeSet` implementations, but regions and traits weren't solid enough to make it generic yet.
* They provide a universal, generic interface. The same trait is used for a forward/reverse iterator, an iterator over a range, etc. Internal iterators end up resulting in a trait for each possible way you could iterate.
* They can be composed with adaptors like `ZipIterator`, which also implement the same trait themselves.
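As a sketch of what such an adaptor looks like (assuming the `Iterator` trait above; not necessarily the exact committed code):

```rust
pub struct ZipIterator<T, U> {
    a: T,
    b: U
}

impl<A, B, T: Iterator<A>, U: Iterator<B>> Iterator<(A, B)> for ZipIterator<T, U> {
    fn next(&mut self) -> Option<(A, B)> {
        // Stop as soon as either underlying iterator is exhausted.
        match (self.a.next(), self.b.next()) {
            (Some(x), Some(y)) => Some((x, y)),
            _ => None
        }
    }
}
```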
The disadvantage is that they're a pain to write without support from the compiler for compiling something like `yield` to a state machine. :)
This can coexist alongside internal iterators since both can use the current `for` protocol. It's easier to write an internal iterator, but external ones are far more powerful/useful so they should probably be provided whenever possible by the standard library.
## Current issues
#5801 is somewhat annoying since explicit type hints are required.
I just wanted to get the essentials working well, so I haven't put much thought into making the naming concise (free functions vs. static `new` methods, etc.).
Making an `Iterable` trait seems like it will have to be a long-term goal, requiring type system extensions. At least without resorting to objects which would probably be unacceptably slow.
This restores the trait that was lost in 216e85fadf. It will eventually be broken up into a more fine-grained trait hierarchy once a design can be agreed upon.
As proposed in issue #5632.
I added some new stuff to libc - hopefully correctly. I only added a single signal constant (SIGKILL) because adding more seems complicated by differences between platforms - and since it is not required for issue #5632, I figure I can flesh out the SIG* constants more in a further pull request.
This refactors much of the ast generation required for `deriving` instances into a common interface, so that new instances only need to specify what they do with the actual data, rather than worry about naming function arguments and extracting fields from structs and enums. (This all happens in `generic.rs`. I've tried to make sure it was well commented and explained, since it's a little abstract at points, but I'm sure it's still a little confusing.)
It makes instances like the comparison traits and `Clone` short and easy to write.
Caveats:
- Not surprisingly, this slows the expansion pass (in some cases, dramatically, specifically deriving Ord or TotalOrd on enums with many variants). However, this shouldn't be too concerning, since in a more realistic case (compiling `core.rc`) the time increased by 0.01s, which isn't worth mentioning. And, it possibly slows type checking very slightly (about 2% worst case), but I'm having trouble measuring it (and I don't understand why this would happen). I think this could be resolved by using traits and encoding it all in the type system so that monomorphisation handles everything, but that would probably be a little tricky to arrange nicely, reduce flexibility and make compiling rustc take longer. (Maybe some judicious use of `#[inline(always)]` would help too; I'll have a bit of a play with it.)
- The abstraction is not currently powerful enough for:
- `IterBytes`: doesn't support arguments of type other than `&Self`.
- `Encodable`/`Decodable` (#5090): doesn't support traits with parameters.
- `Rand` & `FromStr`: doesn't support static functions or arguments of type other than `&Self`.
- `ToStr`: I don't think it supports returning `~str` yet, but I haven't actually tried.
(The last 3 are traits that might be nice to have: the derived `ToStr`/`FromStr` could just read/write the same format as `fmt!("%?", x)`, like `Show` and `Read` in Haskell.)
I have ideas to resolve all of these, but I feel like it would essentially be a simpler version of the `mt` & `ty_` parts of `ast.rs`, and I'm not sure if the simplification is worth having 2 copies of similar code.
Also, this makes Ord, TotalOrd and TotalEq derivable (closes #4269, #5588 and #5589), although a snapshot is required before they can be used in the rust repo.
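Once that snapshot exists, usage looks like any other derivable trait (an illustrative example):

```rust
#[deriving(Eq, TotalEq, Ord, TotalOrd, Clone)]
struct Version {
    major: uint,
    minor: uint
}
```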
If there is anything that is unclear (or incorrect) either here or in the code, I'd like to get it pointed out now, so I can explain/fix it while I'm still intimately familiar with the code.
This adds an example for most of the methods in Rng.
As a total newcomer to Rust, it took a while to figure out how to do basic things like use library functions, because there aren't many usage examples, and most examples that Google turns up are out of date. Something like this would have saved me a bit of time.
This might be a bit verbose. Some alternative options would be to consolidate all the examples into one section, or to only have code for the specific function call inline.
signature. In a nutshell, the idea is to (1) report an error if, for
a region pointer `'a T`, the lifetime `'a` is longer than any
lifetimes that appear in `T` (in other words, if a borrowed pointer
outlives any portion of its contents) and then (2) use this to assume
that in a function like `fn(self: &'a &'b T)`, the relationship `'a <=
'b` holds. This is needed for #5656. Fixes #5728.
rather than a tuple. The current setup iterates over
`BaseIter<(&'self K, &'self V)>` where 'self is a lifetime declared
*in the each method*. You can't place such a type in
the impl declaration. The compiler currently allows it,
but this will not be legal under #5656 and I'm pretty sure
it's not sound now.
It was simpler to just give the variants a value instead of listing out all the cases for (*self, *other) in a match statement or writing spaghetti code. This makes the `cmp` method easier to use with FFI too, since you're a cast away from an idiomatic C comparator function. It would be fine implemented another way though.
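Concretely, the variants now carry the conventional comparator values directly, so an `Ordering` is a cast away from the -1/0/+1 a C comparator expects (roughly; a sketch):

```rust
enum Ordering { Less = -1, Equal = 0, Greater = 1 }

// A cmp result maps onto a C-style comparator return value with a cast.
fn to_c_cmp(o: Ordering) -> int {
    o as int
}
```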
This removes some of the easier instances of mutable fields where the explicit self can just become `&mut self` along with removing some unsafe blocks which aren't necessary any more now that purity is gone.
Most of #4568 is done, except for [one case](https://github.com/alexcrichton/rust/blob/less-mut-fields/src/libcore/vec.rs#L1754) where it looks like it has to do with it being a `const` vector. Removing the unsafe block yields:
```
/Users/alex/code/rust2/src/libcore/vec.rs:1755:12: 1755:16 error: illegal borrow unless pure: creating immutable alias to const vec content
/Users/alex/code/rust2/src/libcore/vec.rs:1755 for self.each |e| {
^~~~
/Users/alex/code/rust2/src/libcore/vec.rs:1757:8: 1757:9 note: impure due to access to impure function
/Users/alex/code/rust2/src/libcore/vec.rs:1757 }
^
error: aborting due to previous error
```
I also didn't delve too much into removing mutable fields with `Cell` or `transmute` and friends.
Performing a deep copy isn't ever desired for a persistent data
structure, and it requires a more complex implementation to do
correctly. A deep copy needs to check for cycles to avoid an infinite
loop.