The included test case would essentially never finish compiling without this
patch. The parser recurses twice at every ExprParen, so the work grows as 2^n
in the nesting depth.
The included test case takes so long to parse on the old compiler that any
regression here would be caught immediately.
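For illustration, here is a hypothetical sketch of the kind of pathological input involved (the actual test case may differ): deeply nested parentheses, where doubling the recursion at every level makes parse time explode.

```rust
// Hypothetical generator for a pathological input: `depth` nested parentheses
// around a single literal, e.g. ((((0)))) for depth 4.
fn nested_parens(depth: usize) -> String {
    format!("{}0{}", "(".repeat(depth), ")".repeat(depth))
}

fn main() {
    // With two recursive calls per ExprParen, parsing this would take on the
    // order of 2^256 steps on the unpatched parser.
    println!("{}", nested_parens(256));
}
```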
I wrote a chapter for the FFI tutorial that describes how to perform callbacks from C code into Rust and gives some hints about what to consider regarding synchronization.
I needed this for my own wrapper and thought it would be helpful for others.
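For a flavour of the pattern the chapter covers, here is a minimal sketch; the C-side functions `register_callback` and `trigger_callback` are hypothetical stand-ins for whatever the wrapped library actually exposes, so this only links against a C library that provides them.

```rust
// A Rust function with the C ABI so that C code can call back into it.
extern "C" fn callback(value: i32) {
    println!("callback invoked from C with {}", value);
}

// Hypothetical C functions the wrapper would link against; the real names
// depend on the library being wrapped.
extern "C" {
    fn register_callback(cb: extern "C" fn(i32));
    fn trigger_callback();
}

fn main() {
    unsafe {
        register_callback(callback);
        trigger_callback(); // the C side calls `callback` here
    }
}
```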
`Zero` and `One` have precise definitions in mathematics as the identities of the `Add` and `Mul` operations respectively. As such, types that implement these identities are now also required to implement their respective operator traits. This should reduce their misuse whilst still enabling them to be used in generalized algebraic structures (not just numbers). Existing usages of `#[deriving(Zero)]` in client code could break under these new rules, but this is probably a sign that they should have been using something like `#[deriving(Default)]` in the first place.
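As a rough sketch of the idea in current syntax (not the exact libstd definitions), the identity traits now have the corresponding operator traits as supertraits, which keeps them usable in generic algebraic code:

```rust
use std::ops::{Add, Mul};

// Illustrative only: the additive identity requires `Add`, the
// multiplicative identity requires `Mul`.
trait Zero: Add<Self, Output = Self> + Sized {
    fn zero() -> Self;
}

trait One: Mul<Self, Output = Self> + Sized {
    fn one() -> Self;
}

impl Zero for i32 { fn zero() -> i32 { 0 } }
impl One for i32 { fn one() -> i32 { 1 } }

// Generic algebraic use: folding over any type with the relevant identity.
fn sum<T: Zero + Copy>(xs: &[T]) -> T {
    xs.iter().fold(T::zero(), |acc, &x| acc + x)
}

fn product<T: One + Copy>(xs: &[T]) -> T {
    xs.iter().fold(T::one(), |acc, &x| acc * x)
}

fn main() {
    assert_eq!(sum(&[1, 2, 3]), 6);
    assert_eq!(product(&[2, 3, 4]), 24);
}
```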
For more information regarding the mathematical definitions of the additive and multiplicative identities, see the following Wikipedia articles:
- http://wikipedia.org/wiki/Additive_identity
- http://wikipedia.org/wiki/Multiplicative_identity
Note that for floating point numbers the laws specified in the doc comments of `Zero::zero` and `One::one` may not always hold. This is, however, also true of many other traits currently implemented by floating point numbers. Which traits floating point numbers should and should not implement is an open question that is beyond the scope of this pull request.
The implementation of `std::num::pow` has been made more succinct and no longer requires `Clone`. The coverage of the associated unit test has also been increased to test for more combinations of bases, exponents, and expected results.
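For illustration, here is a minimal sketch of exponentiation by squaring (not the actual `std::num::pow` source; the multiplicative identity is passed in explicitly to keep the example self-contained). `Copy + Mul` suffices, so no `Clone` bound is needed.

```rust
use std::ops::Mul;

// Exponentiation by squaring: O(log exp) multiplications.
fn pow<T: Mul<Output = T> + Copy>(one: T, mut base: T, mut exp: u32) -> T {
    let mut acc = one;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base;
        }
        base = base * base;
        exp >>= 1;
    }
    acc
}

fn main() {
    assert_eq!(pow(1u32, 3, 4), 81);
    assert_eq!(pow(1.0f64, 2.0, 10), 1024.0);
}
```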
Previously, fixed-length vectors were treated like ~[] and &[] (which can have
length 0), but fixed-length vectors are fixed length: we know at compile time
whether length zero is possible (it is only for [T, .. 0]).
Fixes #11659.
Ignore all newline characters in Base64 decoder to make it compatible with other Base64 decoders.
Most Base64 decoder implementations ignore all newline characters in the input string. Here are some examples:
Python:
```python
>>> "\nA\nQ\n=\n=\n".decode("base64")
'\x01'
```
Ruby:
```ruby
irb(main):001:0> "\nA\nQ\n=\n=\n".unpack("m")
=> ["\x01"]
```
Erlang:
```erlang
1> base64:decode("\nA\nQ\n=\n=\n").
<<1>>
```
Moreover, some Base64 encoders append a newline character to the end of the output string by default:
Python:
```python
>>> "\x01".encode("base64")
'AQ==\n'
```
Ruby:
```ruby
irb(main):001:0> ["\x01"].pack("m")
=> "AQ==\n"
```
So I think it's fairly important for the Rust Base64 decoder to accept Base64 input even when it contains newline characters, including around the padding.
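A minimal sketch of the intended behaviour (illustrative only, not the actual `serialize::base64` implementation): newline characters are simply skipped before decoding, so the wrapped and unwrapped forms decode to the same bytes.

```rust
// Drop CR/LF before handing the text to the decoder, so "AQ==" and
// "\nA\nQ\n=\n=\n" are treated identically.
fn strip_newlines(input: &str) -> String {
    input.chars().filter(|&c| c != '\n' && c != '\r').collect()
}

fn main() {
    assert_eq!(strip_newlines("\nA\nQ\n=\n=\n"), "AQ==");
}
```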
There was an old and barely used implementation of pow, which expected
both parameters to be uint and required more traits to be implemented.
Since a new implementation for `pow` landed, I'm proposing to remove
this old impl in favor of the new one.
The benchmark shows that the new implementation is faster than the one being removed:
```
test num::bench::bench_pow_function           ... bench:  9429 ns/iter (+/- 2055)
test num::bench::bench_pow_with_uint_function ... bench: 28476 ns/iter (+/- 2202)
```
I updated the JSON usage example to match the latest json.rs code and deleted the old branch.
Compared to my previous request, I removed example 3 because it doesn't compile; I don't understand why and don't have time to investigate right now.
This means that compilation continues for longer, so we can report more
errors per compile. This is mildly more user-friendly because it saves users
from having to run rustc n times to see n macro errors: just run it once to
see all of them.
If the library is in the working directory, its path won't contain a "/",
which causes dlopen to search /usr/lib etc. It turns out that Path
auto-normalizes during joins, so Path::new(".").join(path) is actually a
no-op.
The `malloc` family of functions may return a null pointer for a
zero-size allocation, which should not be interpreted as an
out-of-memory error.
On implementations where `malloc(0)` does return a real pointer, handling
this case ourselves also results in memory savings for zero-size types.
This also switches some code to `malloc_raw` in order to maintain a
centralized point for handling out-of-memory in `rt::global_heap`.
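For illustration, a minimal sketch of the rule described above (not the actual `rt::global_heap` code; it assumes the `libc` crate for `malloc`/`free`): a null return is only an out-of-memory error when the requested size is non-zero.

```rust
use std::ptr::NonNull;

// Illustrative only: zero-size requests never touch malloc, so malloc's
// null-for-zero-size behaviour can never be mistaken for out-of-memory.
unsafe fn allocate(size: usize) -> *mut u8 {
    if size == 0 {
        // Zero-size types need no real allocation; any non-null, aligned
        // dangling pointer will do.
        return NonNull::<u8>::dangling().as_ptr();
    }
    let ptr = libc::malloc(size) as *mut u8;
    if ptr.is_null() {
        // Only a failed non-zero-size request means we are out of memory.
        std::process::abort();
    }
    ptr
}

fn main() {
    unsafe {
        let p = allocate(16);
        assert!(!p.is_null());
        libc::free(p as *mut libc::c_void);
        // Zero-size request: no malloc call, no out-of-memory handling.
        assert!(!allocate(0).is_null());
    }
}
```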
Closes #11634
NodeIds are sequential integers starting at zero, so we can achieve some
memory savings by just storing the items all in a line in a vector.
The occupancy for typical crates seems to be 75-80%, so we're already
more efficient than a HashMap (maximum occupancy 75%), not even counting
the extra book-keeping that HashMap does.
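For illustration, a minimal sketch of the idea (type and method names are mine, not the compiler's): a dense vector indexed by NodeId in place of a HashMap.

```rust
// Because NodeIds are dense sequential integers, a Vec<Option<T>> indexed by
// id can stand in for a HashMap<NodeId, T>.
struct NodeMap<T> {
    items: Vec<Option<T>>,
}

impl<T> NodeMap<T> {
    fn new() -> NodeMap<T> {
        NodeMap { items: Vec::new() }
    }

    fn insert(&mut self, id: usize, value: T) {
        if id >= self.items.len() {
            self.items.resize_with(id + 1, || None);
        }
        self.items[id] = Some(value);
    }

    fn get(&self, id: usize) -> Option<&T> {
        self.items.get(id).and_then(|slot| slot.as_ref())
    }
}

fn main() {
    let mut map = NodeMap::new();
    map.insert(0, "crate root");
    map.insert(3, "some item");
    assert_eq!(map.get(3), Some(&"some item"));
    assert!(map.get(2).is_none());
}
```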
This is my first patch so feedback appreciated!
There is a bug when initialising `bitv::Bitv::new(nbits, init)` with `init = true`: it creates a `Bitv` with an underlying representation of `!0u` rather than the actual desired bit layout (e.g. `11111111` instead of `00001111`). This mostly works, because the stored size keeps access within legal bounds. However, when using `BitvSet::from_bitv(Bitv)`, we find that `bitvset.contains(i)` can return true for values of `i` that should not in fact be in the set.
```rust
let bs = BitvSet::from_bitv(Bitv::new(100, true));
assert!(!bs.contains(&127)); // fails
```
The fix is to create the correct representation by treating the various cases separately and using a bit shift, `(1 << nbits) - 1`, to generate the correct number of `1`s where necessary.
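For illustration, a minimal sketch of the masking idea (not the actual `Bitv` source): when filling with `true`, only the low `nbits` bits of the last word should be set, which `(1 << nbits) - 1` produces.

```rust
// Illustrative only: produces a word with exactly the low `nbits` bits set,
// for 0 < nbits < 64; the empty and full-word cases are handled separately.
fn low_mask(nbits: u32) -> u64 {
    assert!(nbits > 0 && nbits < 64, "empty and full words need separate handling");
    (1u64 << nbits) - 1
}

fn main() {
    assert_eq!(low_mask(4), 0b0000_1111);
    assert_eq!(low_mask(8), 0xFF);
}
```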
I found the boxes diagram in the tutorial misleading about how the enum works.
The current diagram makes it seem that there is a separate Cons struct, when there is only one kind of struct for the List type, and Nil is drawn almost as if it doesn't consume memory.
I'm aware of the optimization that happens for this enum, which takes advantage of the fact that the pointer cannot be null, but this is an implementation detail and not the only one that applies here. I can add a note below the diagram mentioning this if you like.
This makes pretty-print tests that have aux crates work correctly on Android.
Without it, they generate ICEs about incorrect node ids. Not sure why.