`Zero` and `One` have precise definitions in mathematics as the identities of the `Add` and `Mul` operations respectively. As such, types that implement these identities are now also required to implement their respective operator traits. This should reduce their misuse whilst still enabling them to be used in generalized algebraic structures (not just numbers). Existing usages of `#[deriving(Zero)]` in client code could break under these new rules, but this is probably a sign that they should have been using something like `#[deriving(Default)]` in the first place.
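As a rough sketch of what this requirement looks like (simplified trait definitions of my own, not the exact libstd ones):

```rust
use std::ops::{Add, Mul};

// The additive identity can only be implemented by types that also
// implement the `Add` operator, and likewise for `One` and `Mul`.
trait Zero: Add<Self, Output = Self> + Sized {
    fn zero() -> Self; // x + zero() == x and zero() + x == x
}

trait One: Mul<Self, Output = Self> + Sized {
    fn one() -> Self; // x * one() == x and one() * x == x
}

impl Zero for i32 {
    fn zero() -> i32 { 0 }
}

impl One for i32 {
    fn one() -> i32 { 1 }
}

fn main() {
    assert_eq!(i32::zero() + 5, 5);
    assert_eq!(i32::one() * 5, 5);
}
```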
For more information regarding the mathematical definitions of the additive and multiplicative identities, see the following Wikipedia articles:
- http://wikipedia.org/wiki/Additive_identity
- http://wikipedia.org/wiki/Multiplicative_identity
Note that for floating point numbers the laws specified in the doc comments of `Zero::zero` and `One::one` may not always hold. The same is true, however, of many other traits currently implemented for floating point numbers. Which traits floating point numbers should and should not implement is an open question that is beyond the scope of this pull request.
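As an illustration of my own (not from the pull request), `NaN` already breaks the `x + zero() == x` law under `==`, and signed zeros behave in a similarly surprising way:

```rust
fn main() {
    // NaN compares unequal to everything, including itself, so adding
    // the additive identity does not give back an `==`-equal value.
    let x = f64::NAN;
    assert!(x + 0.0 != x);

    // Adding positive zero to negative zero yields positive zero.
    assert!((-0.0f64 + 0.0).is_sign_positive());
}
```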
The implementation of `std::num::pow` has been made more succinct and no longer requires `Clone`. The coverage of the associated unit test has also been increased to test for more combinations of bases, exponents, and expected results.
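The underlying algorithm is exponentiation by squaring. The sketch below is illustrative rather than the actual libstd code: it takes the multiplicative identity as a parameter and uses a `Copy` bound in place of the by-reference `Mul` operator of the time, which is what made `Clone` unnecessary.

```rust
use std::ops::Mul;

// Exponentiation by squaring: O(log exp) multiplications.
fn pow<T: Copy + Mul<Output = T>>(one: T, mut base: T, mut exp: u32) -> T {
    let mut acc = one;
    while exp > 0 {
        if exp & 1 == 1 {
            // Fold the current power of the base into the accumulator
            // for every set bit of the exponent.
            acc = acc * base;
        }
        base = base * base;
        exp >>= 1;
    }
    acc
}

fn main() {
    assert_eq!(pow(1u64, 3, 4), 81);
    assert_eq!(pow(1u64, 2, 0), 1); // anything to the power zero is one
}
```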
Previously, they were treated like ~[] and &[] (which can have length
0), but fixed length vectors are fixed length, i.e. we know at compile
time if it's possible to have length zero (which is only for [T, .. 0]).
Fixes #11659.
Ignore all newline characters in the Base64 decoder to make it compatible with other Base64 decoders.
Most Base64 decoder implementations ignore all newline characters in the input string. Here are some examples:
Python:
```python
>>> "
A
Q
=
=
".decode("base64")
'\x01'
```
Ruby:
```ruby
irb(main):001:0> "\nA\nQ\n=\n=\n".unpack("m")
=> ["\x01"]
```
Erlang:
```erlang
1> base64:decode("\nA\nQ\n=\n=\n").
<<1>>
```
Moreover, some Base64 encoders append a newline character at the end of the output string by default:
Python:
```python
>>> "".encode("base64")
'AQ==
'
```
Ruby:
```ruby
irb(main):001:0> ["\x01"].pack("m")
=> "AQ==\n"
```
So I think it's fairly important for the Rust Base64 decoder to accept Base64 input even with newline characters in the padding.
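As a standalone sketch of the requested behaviour (not the actual libstd patch), the decoder can simply skip newline characters while scanning its input:

```rust
// Strip CR/LF before (or while) decoding, so "\nA\nQ\n=\n=\n" is treated
// exactly like "AQ==".
fn strip_newlines(input: &str) -> String {
    input.chars().filter(|&c| c != '\n' && c != '\r').collect()
}

fn main() {
    assert_eq!(strip_newlines("\nA\nQ\n=\n=\n"), "AQ==");
}
```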
There was an old and barely used implementation of pow, which expected
both parameters to be uint and required more traits to be implemented.
Since a new implementation for `pow` landed, I'm proposing to remove
this old impl in favor of the new one.
The benchmark shows that the new implementation is faster than the one being removed:
```
test num::bench::bench_pow_function           ... bench:  9429 ns/iter (+/- 2055)
test num::bench::bench_pow_with_uint_function ... bench: 28476 ns/iter (+/- 2202)
```
I updated the JSON usage example to match the latest json.rs code, and deleted the old branch.
Compared to my last request, I removed example 3 because it doesn't compile. I don't understand why, and I don't have time to investigate right now.
This means that compilation continues for longer, and so we can see more
errors per compile. This is mildly more user-friendly because it stops
users having to run rustc n times to see n macro errors: just run it
once to see all of them.
If the library is in the working directory, its path won't contain a "/",
which will cause dlopen to search /usr/lib etc. instead. It turns out that
Path auto-normalizes during joins, so Path::new(".").join(path) is actually
a no-op.
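A sketch of the intent (illustrative helper, not the actual loader code): make sure the name handed to dlopen contains a path separator, so it is opened relative to the working directory instead of being searched for in the system library directories.

```rust
fn dlopen_arg(name: &str) -> String {
    if name.contains('/') {
        // Already a path; dlopen will open it directly.
        name.to_string()
    } else {
        // A bare file name would be looked up in /usr/lib etc.,
        // so anchor it to the working directory explicitly.
        format!("./{}", name)
    }
}

fn main() {
    assert_eq!(dlopen_arg("libfoo.so"), "./libfoo.so");
    assert_eq!(dlopen_arg("build/libfoo.so"), "build/libfoo.so");
}
```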
The `malloc` family of functions may return a null pointer for a
zero-size allocation, which should not be interpreted as an
out-of-memory error.
If the implementation does not return a null pointer (i.e. it actually
allocates memory for zero-size requests), then handling this case
ourselves results in memory savings for zero-size types.
This also switches some code to `malloc_raw` in order to maintain a
centralized point for handling out-of-memory in `rt::global_heap`.
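A minimal sketch of the idea, assuming the `libc` crate and names of my own choosing rather than the actual `rt::global_heap` internals:

```rust
use std::ptr::NonNull;

// Never pass a zero size to malloc: then a null return can only mean
// out-of-memory, and zero-size types cost no heap memory at all.
unsafe fn malloc_or_abort(size: usize) -> *mut libc::c_void {
    if size == 0 {
        // Zero-size allocations need no backing storage; hand back a
        // well-aligned sentinel that is never dereferenced or freed.
        return NonNull::<u8>::dangling().as_ptr() as *mut libc::c_void;
    }
    let p = libc::malloc(size);
    if p.is_null() {
        // A null pointer for a non-zero request really is OOM.
        std::process::abort();
    }
    p
}

fn main() {
    unsafe {
        let p = malloc_or_abort(16);
        libc::free(p);
        // A zero-size request succeeds without touching the allocator.
        let _zst = malloc_or_abort(0);
    }
}
```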
Closes #11634
NodeIds are sequential integers starting at zero, so we can achieve some
memory savings by simply storing the items contiguously in a vector. The
occupancy for typical crates seems to be 75-80%, so we're already more
efficient than a HashMap (maximum occupancy 75%), not even counting the
extra book-keeping that a HashMap does.
This is my first patch so feedback appreciated!
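A minimal sketch of the data structure being described, with hypothetical names rather than the actual rustc types:

```rust
// Because NodeIds are dense sequential integers, a Vec indexed by id can
// replace a HashMap<NodeId, T>, trading a few empty slots for less
// book-keeping.
struct NodeMap<T> {
    items: Vec<Option<T>>,
}

impl<T> NodeMap<T> {
    fn new() -> NodeMap<T> {
        NodeMap { items: Vec::new() }
    }

    fn insert(&mut self, id: usize, item: T) {
        if id >= self.items.len() {
            // Grow the vector with empty slots up to the new id.
            self.items.resize_with(id + 1, || None);
        }
        self.items[id] = Some(item);
    }

    fn get(&self, id: usize) -> Option<&T> {
        self.items.get(id).and_then(|slot| slot.as_ref())
    }
}

fn main() {
    let mut map = NodeMap::new();
    map.insert(0, "crate root");
    map.insert(3, "some item");
    assert!(map.get(3).is_some());
    assert!(map.get(1).is_none()); // unused id: an empty slot, not a lookup miss
}
```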
There is a bug when initialising `bitv::Bitv::new(int, bool)` with `bool = true`: it creates a `Bitv` whose underlying representation is `!0u` rather than the actual desired bit layout (e.g. `11111111` instead of `00001111`). On its own this works OK, because a size field is included which keeps accesses within legal bounds. However, when using `BitvSet::from_bitv(Bitv)`, we then find that `bitvset.contains(i)` can return true when `i` should not in fact be in the set.
```rust
let bs = BitvSet::from_bitv(Bitv::new(100, true));
assert!(!bs.contains(&127)); // fails
```
The fix is to create the correct representation by treating the various cases separately and using a bitshift, `(1 << nbits) - 1`, to generate the correct number of `1`s where necessary.
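A sketch of the masking logic (names are illustrative, not the libstd internals): only the low `nbits` bits of the final, partially used word should be set, and the full-word case has to be handled separately because a shift by the whole word width is not allowed.

```rust
fn filled_word(nbits: u32) -> u64 {
    assert!(nbits <= 64);
    if nbits == 64 {
        // Shifting by the full word width would overflow, so the
        // all-ones case is special-cased.
        !0u64
    } else {
        (1u64 << nbits) - 1
    }
}

fn main() {
    assert_eq!(filled_word(4), 0b0000_1111);
    assert_eq!(filled_word(8), 0b1111_1111);
    assert_eq!(filled_word(64), !0u64);
    assert_eq!(filled_word(0), 0); // a word with no bits in use stays empty
}
```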
I found the boxes diagram in the tutorial misleading about how the enum worked.
The current diagram makes it seem that there is a separate Cons struct, when there is only one kind of struct for the List type, and Nil is drawn almost as if it consumes no memory.
I'm aware of the optimization that happens for this enum, which takes advantage of the fact that the pointer cannot be null, but this is an implementation detail and not the only one that applies here. I can add a note below the diagram mentioning this if you like.
This makes pretty-print tests that have aux crates work correctly on Android.
Without it, they generate ICEs about incorrect node ids. Not sure why.
This commit re-works how the monitor() function works and how it both receives
and transmits errors. There are a few cases in which the compiler can abort:
1. A normal compiler error. In this case, the compiler raises a FatalError as
the failure value of the task. If this happens, then the monitor task does
nothing. It ignores all stderr output of the child task and it also
suppresses the failure message of the main task itself. This means that on a
normal compiler error just the error message itself is printed.
2. A normal internal compiler error. These are invoked from sess.span_bug() and
friends. In these cases, they follow the same path (raising a FatalError),
but they will also print an ICE message which has a URL to go report a bug.
3. An actual compiler bug. This happens whenever anything calls fail!() instead
of going through the session itself. In this case, we print out stuff about
RUST_LOG=2 and we by default capture all stderr and print via warn!() so it's
only printed out with the RUST_LOG var set.
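A minimal sketch of the overall pattern, using modern `std::thread` and names of my own choosing rather than rustc's actual monitor and task API:

```rust
use std::any::Any;
use std::panic;
use std::thread;

// Sentinel panic payload: the error message was already printed, so the
// monitor should stay quiet when it sees this.
struct FatalError;

fn monitor(body: fn()) {
    let handle = thread::spawn(body);
    if let Err(payload) = handle.join() {
        if payload.is::<FatalError>() {
            // Cases 1 and 2: a normal (internal) compiler error; nothing
            // more to print here.
        } else {
            // Case 3: an unexpected panic, so point the user at a bug report.
            report_ice(payload);
        }
    }
}

fn report_ice(_payload: Box<dyn Any + Send>) {
    eprintln!("error: internal compiler error: unexpected panic");
    eprintln!("note: this is a bug, please file a report");
}

fn main() {
    // Silence the default panic message so a FatalError aborts quietly,
    // mirroring how the monitor suppresses the child task's output.
    panic::set_hook(Box::new(|_| {}));
    monitor(|| panic::panic_any(FatalError));
}
```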
For `use` statements, this means disallowing qualifiers inside functions and
disallowing `priv` outside of functions.
For `extern mod` statements, this means disallowing everything everywhere. It
may have been envisioned for `pub extern mod foo` to be a thing, but it
currently doesn't do anything (resolve doesn't pick it up), so better to err on
the side of forwards-compatibility and forbid it entirely for now.
Closes #9957