The following are renamed:
* `min_value` => `MIN`
* `max_value` => `MAX`
* `bits` => `BITS`
* `bytes` => `BYTES`
All tests pass, except for `run-pass/phase-syntax-link-does-resolve.rs`. I doubt that failure is related, though.
Fixes #10010.
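For illustration, a minimal sketch of how call sites read after the rename (written with today's associated constants; the PR itself changes the module-level statics, and `BYTES` follows the same naming pattern in the era's libraries):

```rust
fn main() {
    // Before: u32::min_value(), u32::max_value(), u32::bits(), u32::bytes()
    // After:  the SCREAMING_CASE names below.
    assert_eq!(u32::MIN, 0);
    assert_eq!(u32::MAX, 4_294_967_295);
    assert_eq!(u32::BITS, 32);
    assert_eq!(i8::MIN, -128);
}
```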
The race here happened when a port had its deschedule in select() canceled, but
the other chan had already been dropped. This meant that the DISCONNECTED case
was hit in abort_selection, but the to_wake cell hadn't been emptied yet (this
was done after aborting), causing an assert in abort_selection to trip.
To fix this, the to_wake cell is just emptied before abort_selection is called
(we know that we're the owner of it already).
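A toy model of the ordering change, with hypothetical names standing in for the real runtime internals (`to_wake`, `abort_selection`): the wake cell must be emptied before the abort call so that the DISCONNECTED path never sees it occupied.

```rust
// Hypothetical stand-in for the port's state; this only illustrates the
// invariant, not the real channel implementation.
struct PortState {
    to_wake: Option<String>, // stand-in for the descheduled task handle
}

impl PortState {
    fn abort_selection(&self) {
        // Mirrors the assert that used to trip when the peer had already
        // been dropped (the DISCONNECTED case).
        assert!(self.to_wake.is_none(), "to_wake must be emptied first");
    }
}

fn main() {
    let mut port = PortState { to_wake: Some("blocked task".to_string()) };
    // Fixed ordering: take ownership of the wake handle first (we already
    // own it), ...
    let _task = port.to_wake.take();
    // ... and only then abort the selection; the assert can no longer trip.
    port.abort_selection();
}
```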
They all have to go into a single module at the moment unfortunately.
Ideally, the logging macros would live in std::logging, condition! would
live in std::condition, format! in std::fmt, etc. However, this
introduces cyclic dependencies between those modules and the macros they
use which the current expansion system can't deal with. We may be able
to get around this by changing the expansion phase to a two-pass system
but that's for a later PR.
Closes #2247
cc #11763
This test is designed to ensure that running a non-existent executable
results in a correct error message (FileNotFound in the case of this
test). However, if you try to run an executable that doesn't exist, and
that requires searching through the $PATH, and one of the $PATH components
is not readable, then a PermissionDenied error will be returned, instead
of FileNotFound.
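A sketch of the distinction, using today's `std::process` API in place of the old runtime I/O API (the binary name is made up; only the error kinds matter):

```rust
use std::io::ErrorKind;
use std::process::Command;

fn main() {
    match Command::new("definitely-not-a-real-binary").spawn() {
        // What the test expects when the executable does not exist.
        Err(e) if e.kind() == ErrorKind::NotFound => println!("file not found"),
        // What can come back instead when an unreadable $PATH component is
        // hit while searching for the executable.
        Err(e) if e.kind() == ErrorKind::PermissionDenied => println!("permission denied"),
        other => println!("unexpected result: {:?}", other.map(|_| ())),
    }
}
```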
By default, the compiler and libraries are all still built with rpaths, but this
can be opted out of with --disable-rpath to ./configure or --no-rpath to rustc.
Closes #5219
This change adds two improvements to docs searching functionality.
First, search results will immediately be displayed when a ?search=searchterm
query string parameter is provided to any docs url.
Second, search results are now inserted into the browser history, allowing for
easier navigation between search results and docs pages.
Before this commit, rustc looked in `dirname $0`/../lib for libraries
but that doesn't work when rustc is invoked through a symlink.
This commit makes rustc look in `dirname $(readlink $0)`/../lib, i.e.
it first canonicalizes the symlink before walking up the directory tree.
Fixes #3632.
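A rough sketch of the lookup, using today's `std::fs`/`std::env` APIs (the helper name `sysroot_lib_dir` is made up, and `read_link` only resolves a single level of symlink where the real change canonicalizes):

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

// Resolve the symlink behind argv[0] before walking up to ../lib, instead of
// using `dirname $0` directly.
fn sysroot_lib_dir() -> Option<PathBuf> {
    let argv0 = PathBuf::from(env::args().next()?);
    // If argv0 is not a symlink, fall back to argv0 itself.
    let real = fs::read_link(&argv0).unwrap_or(argv0);
    Some(real.parent()?.join("../lib"))
}

fn main() {
    println!("{:?}", sysroot_lib_dir());
}
```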
On my Gentoo Linux machine, the c-dynamic-dylib test is failing, because libcfoo can't be found. bar has a correct rpath for finding libfoo.so, but libfoo.so's rpath doesn't contain the right entries for finding libcfoo.
Below is the test failure on my machine; this test passes with this commit:
```
maketest: c-dynamic-dylib
----- /storage/home/achin/devel/rust/src/test/run-make/c-dynamic-dylib/ --------------------
------ stdout ---------------------------------------------
make[1]: Entering directory `/storage/home/achin/devel/rust/src/test/run-make/c-dynamic-dylib'
gcc -Wall -Werror -g -fPIC -m64 -L /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib -c -o /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib/libcfoo.o cfoo.c
gcc -Wall -Werror -g -fPIC -m64 -L /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib -o /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib/libcfoo.so /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib/libcfoo.o -shared
/storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/stage2/bin/rustc --out-dir /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib -L /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib foo.rs
/storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/stage2/bin/rustc --out-dir /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib -L /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib bar.rs
/storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib/bar
rm /storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib/libcfoo.o
make[1]: Leaving directory `/storage/home/achin/devel/rust/src/test/run-make/c-dynamic-dylib'
------ stderr ---------------------------------------------
/storage/home/achin/devel/rust/x86_64-unknown-linux-gnu/test/run-make/c-dynamic-dylib/bar: error while loading shared libraries: libcfoo.so: cannot open shared object file: No such file or directory
make[1]: *** [all] Error 127
------ ---------------------------------------------
```
The old method of serializing the AST gives totally bogus spans if the
expansion of an imported macro causes compilation errors. The best
solution seems to be to serialize the actual textual macro definition
and load it the same way the std-macros are. I'm not totally confident
that getting the source from the CodeMap will always do the right thing,
but it seems to work in simple cases.
Mutable and immutable borrows place some restrictions on what you can do
with the variable until the borrow ends. This commit attempts to convey
to the user what those restrictions are. Also, if the original borrow is
a mutable borrow, the error message has been changed (more specifically,
i. "cannot borrow `x` as immutable because it is also borrowed as
mutable" and ii. "cannot borrow `x` as mutable more than once" have
been changed to "cannot borrow `x` because it is already borrowed as
mutable").
In addition, this adds a (custom) span note to communicate where the
original borrow ends.
```rust
fn main() {
    match true {
        true => {
            let mut x = 1;
            let y = &x;
            let z = &mut x;
        }
        false => ()
    }
}
test.rs:6:21: 6:27 error: cannot borrow `x` as mutable because it is already borrowed as immutable
test.rs:6 let z = &mut x;
^~~~~~
test.rs:5:21: 5:23 note: previous borrow of `x` occurs here; the immutable borrow prevents subsequent moves or mutable borrows of `x` until the borrow ends
test.rs:5 let y = &x;
^~
test.rs:7:10: 7:10 note: previous borrow ends here
test.rs:3 true => {
test.rs:4 let mut x = 1;
test.rs:5 let y = &x;
test.rs:6 let z = &mut x;
test.rs:7 }
^
```
```rust
fn foo3(t0: &mut &mut int) {
    let t1 = &mut *t0;
    let p: &int = &**t0;
}
fn main() {}
test.rs:3:19: 3:24 error: cannot borrow `**t0` because it is already borrowed as mutable
test.rs:3 let p: &int = &**t0;
^~~~~
test.rs:2:14: 2:22 note: previous borrow of `**t0` as mutable occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `**t0` until the borrow ends
test.rs:2 let t1 = &mut *t0;
^~~~~~~~
test.rs:4:2: 4:2 note: previous borrow ends here
test.rs:1 fn foo3(t0: &mut &mut int) {
test.rs:2 let t1 = &mut *t0;
test.rs:3 let p: &int = &**t0;
test.rs:4 }
^
```
For the "previous borrow ends here" note, if the span is too long (has too many lines), then only the first and last lines are printed, and the middle is replaced with dot dot dot:
```rust
fn foo3(t0: &mut &mut int) {
    let t1 = &mut *t0;
    let p: &int = &**t0;
}
fn main() {}
test.rs:3:19: 3:24 error: cannot borrow `**t0` because it is already borrowed as mutable
test.rs:3 let p: &int = &**t0;
^~~~~
test.rs:2:14: 2:22 note: previous borrow of `**t0` as mutable occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `**t0` until the borrow ends
test.rs:2 let t1 = &mut *t0;
^~~~~~~~
test.rs:7:2: 7:2 note: previous borrow ends here
test.rs:1 fn foo3(t0: &mut &mut int) {
...
test.rs:7 }
^
```
(Sidenote: the `span_end_note` currently also has issue #11715)
Renamed the `invert()` function in `iter.rs` to `flip()`, from #10632.
Also renamed the `Invert<T>` type to `Flip<T>`.
Some related code comments changed. Documentation that I could find has
been updated, and all the instances I could locate where the
function/type were called have been updated as well.
This is my first contribution to Rust! Apologies in advance if I've snarfed the
PR process, I'm not used to rebase.
I initially had issues with the `codegen` section of the tests failing; however,
the `make check` process is not reporting any failures at this time. I think
that was a local env issue more than me facerolling my changes. :)
This patchset consists of three parts:
- rustpkg doesn't guess the crate version if it is not given by the user.
- `rustpkg::version::Version` is replaced by `Option<~str>`.
It removes some semantic versioning portions which are not currently used.
(cc #8405 and #11396)
`rustpkg::crate_id::CrateId` is also replaced by `syntax::crateid::CrateId`.
- rustpkg now computes a hash to find a crate, instead of manually parsing filenames.
cc @metajack
Fixes the following error when executing `make check-lite`:
```
Traceback (most recent call last):
  File "/home/bnoordhuis/src/rust/src/etc/check-summary.py", line 27, in <module>
    map(summarise, logfiles)
  File "/home/bnoordhuis/src/rust/src/etc/check-summary.py", line 10, in summarise
    with open(fname) as fd:
IOError: [Errno 2] No such file or directory: 'tmp/*.log'
```
When there is a `println!` macro in the code, compilation never ends.
```rust
// print.rs
fn main() {
    println!("Hello!");
}
```
```bash
$ RUST_LOG=rustc rustc print.rs
```
And this is part of the output from stderr.
```bash
# ...
Looking up syntax::ast::DefId{crate: 1u32, node: 176234u32}
looking up syntax::ast::DefId{crate: 1u32, node: 176235u32} : extra::ebml::Doc<>{data: &[168u8, 16u8, 0u8, 0u8, 16u8, 51u8, 101u8, 53u8, 97u8, 101u8, 98u8, 56u8, 51u8, 55u8, 97u8, 101u8, 49u8, 54u8, 50u8
# ...
# vector which has infinite length.
```
* Note: Rust 0.9, 0.10-pre
This is just an initial implementation and does not yet fully replace `~[T]`. A generic initialization syntax for containers is missing, and the slice functionality needs to be reworked to make auto-slicing unnecessary.
Traits for supporting indexing properly are also required. This also needs to be fixed to make ring buffers as easy to use as vectors.
The tests and documentation for `~[T]` can be ported over to this type when it is removed. I don't really expect DST to happen for vectors as having both `~[T]` and `Vec<T>` is overcomplicated and changing the slice representation to 3 words is not at all appealing. Unlike with traits, it's possible (and easy) to implement `RcSlice<T>` and `GcSlice<T>` without compiler help.
Fixes #6593
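For reference, basic use of the new type looks like this (explicit `as_slice` because, as noted above, auto-slicing is not in place yet):

```rust
fn main() {
    let mut v: Vec<i32> = Vec::new();
    v.push(1);
    v.push(2);
    v.push(3);
    assert_eq!(v.len(), 3);

    // Slice explicitly for now; a dedicated slicing story comes later.
    let s: &[i32] = v.as_slice();
    assert_eq!(s[0], 1);
}
```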
Currently, Rust provides no way to print very large or very small floating point values, which come up routinely in scientific and modeling work. The classical solution to this is to use scientific/exponential notation, which, not coincidentally, corresponds to how floating point values are encoded in memory. Given this, there are two solutions to the problem. One is what, as far as I understand it, Python does: for floating point numbers in a certain range it does what we do today with the `'f'` formatting flag, otherwise it switches to exponential notation. The other way is to provide a set of formatting flags to explicitly choose the exponential notation, like it is done in C. I've chosen the second way as I think it's important to provide that kind of control to the user.
This pull request changes the `std::num::strconv::float_to_str_common` function to optionally format floating point numbers using the exponential (scientific) notation. The base of the significand can be varied between 2 and 25, while the base of the exponent can be 2 or 10.
Additionally this adds two new formatting specifiers to `format!` and friends: `'e'` and `'E'` which switch between outputs like `1.0e5` and `1.0E5`. Mostly parroting C stdlib in this sense, although I wasn't going for an exact output match.
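A quick sketch of how the new specifiers look at a call site (the exact digits and exponent rendering of the era's implementation may differ slightly):

```rust
fn main() {
    // Lower- and upper-case exponential notation.
    println!("{:e}", 123456.0f64); // 1.23456e5
    println!("{:E}", 0.00001f64);  // 1E-5
}
```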
Native timers are a much hairier thing to deal with than green timers due to the
interface that we would like to expose (both a blocking sleep() and a
channel-based interface). I ended up implementing timers in three different ways
for the various platforms that we support.
In all three of the implementations, there is a worker thread which does send()s
on channels for timers. This worker thread is initialized once and then
communicated to in a platform-specific manner, but there's always a shared
channel available for sending messages to the worker thread.
* Windows - I decided to use windows kernel timer objects via
CreateWaitableTimer and SetWaitableTimer in order to provide sleeping
capabilities. The worker thread blocks via WaitForMultipleObjects where one of
the objects is an event that is used to wake up the helper thread (which then
drains the incoming message channel for requests).
* Linux/(Android?) - These have the ideal interface for implementing timers,
timerfd_create. Each timer corresponds to a timerfd, and the helper thread
uses epoll to wait for all active timers and then send() for the next one that
wakes up. The tricky part in this implementation is updating a timerfd, but
see the implementation for the fun details.
* OSX/FreeBSD - These obviously don't have the Windows APIs, and sadly don't
have the timerfd api available to them, so I have thrown together a solution
which uses select() plus a timeout in order to ad-hoc-ly implement a timer
solution for threads. The implementation is backed by a sorted array of timers
which need to fire. As I said, this is an ad-hoc solution which is certainly
not accurate timing-wise. I have done this implementation due to the lack of
other primitives to provide an implementation, and I've done it the best that
I could, but I'm sure that there's room for improvement.
I'm pretty happy with how these implementations turned out. In theory we could
drop the timerfd implementation and have linux use the select() + timeout
implementation, but it's so inaccurate that I would much rather continue to use
timerfd rather than my ad-hoc select() implementation.
The only change that I would make to the API in general is to have a generic
sleep() method on an IoFactory which doesn't require allocating a Timer object.
For everything but windows it's super-cheap to request a blocking sleep for a
set amount of time, and it's probably worth it to provide a sleep() which
doesn't do something like allocate a file descriptor on linux.
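For a feel of the shared design, here is a much-simplified sketch of the helper-thread pattern described above: one thread owns every pending timer and performs the send()s when they fire. The real implementations block in WaitForMultipleObjects, timerfd/epoll, or select()-with-a-timeout; this sketch just polls the request channel with a short timeout.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;
use std::time::{Duration, Instant};

// A one-shot timer request: fire `reply` once `deadline` has passed.
struct Oneshot {
    deadline: Instant,
    reply: Sender<()>,
}

fn spawn_timer_helper() -> Sender<Oneshot> {
    let (tx, rx) = channel::<Oneshot>();
    thread::spawn(move || {
        let mut pending: Vec<Oneshot> = Vec::new();
        loop {
            // Fire every timer whose deadline has passed.
            let now = Instant::now();
            pending.retain(|t| {
                if t.deadline <= now {
                    let _ = t.reply.send(());
                    false
                } else {
                    true
                }
            });
            // Pick up new requests; the timeout bounds how late a timer fires.
            if let Ok(req) = rx.recv_timeout(Duration::from_millis(5)) {
                pending.push(req);
            }
        }
    });
    tx
}

fn main() {
    let helper = spawn_timer_helper();
    let (reply_tx, reply_rx) = channel();
    helper
        .send(Oneshot {
            deadline: Instant::now() + Duration::from_millis(50),
            reply: reply_tx,
        })
        .unwrap();
    reply_rx.recv().unwrap(); // blocks until the helper fires the timer
    println!("timer fired");
}
```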
This routine is currently only used to clean up the timer helper thread in the
libnative implementation, but there are possibly other uses for this.
The documentation is clear that the procedures are *not* run with any task
context and hence have very little available to them. I also opted to disallow
at_exit inside of at_exit and just abort the process at that point.
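A toy model of the behaviour described (not the real `std::rt::at_exit`): handlers queue up while the program runs, and once the queue has been consumed, further registration is rejected, standing in for the abort.

```rust
use std::sync::Mutex;

// `None` means the handlers have already run and registration is closed.
static QUEUE: Mutex<Option<Vec<fn()>>> = Mutex::new(Some(Vec::new()));

fn at_exit(f: fn()) {
    match QUEUE.lock().unwrap().as_mut() {
        Some(q) => q.push(f),
        // The real implementation aborts the process here.
        None => panic!("at_exit called while running at_exit handlers"),
    }
}

fn run_at_exit() {
    let handlers = QUEUE.lock().unwrap().take().expect("already ran");
    for f in handlers {
        f(); // run with no task context, exactly as documented
    }
}

fn main() {
    at_exit(|| println!("goodbye"));
    // ... the rest of the program runs ...
    run_at_exit();
}
```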
I've started working on a patch for #7313 . So far I tried to replace C types in `src/libstd/unstable/*` and related files.
So far, I have two questions. Is there a convention for passing pointers around in `std` as Rust types? Sometimes pointers are passed around as `*c_char` (which seems to be an `*i8`), `*c_void` or `*u8`, which leads to a lot of casts. E.g: [`exchange_malloc`](https://github.com/fhahn/rust/compare/issue-7313-replace-c-types?expand=1#diff-39f44b8c3f4496abab854b3425ac1617R60) used to return a `*c_char` but the function in turn only calls `malloc_raw` which returns a `*c_void`.
Is there a specific reason for this?
The second question is about `CString` and related functions like `with_c_str`. At the moment these functions use `*c_char`. Should I replace it with `*u8` or keep it, because it's a wrapper around classical C strings?
Previously rustpkg tried to parse filenames to find crates. Now we use
deterministic hashes, so it becomes possible to directly construct the
filename and check whether the file exists.
rustpkg accessed the git repo to read tags and guess the package version,
but that is not very useful: the version can be given explicitly by the
user, and implicit guessing may cause confusion.
[On 2013-12-06, I wrote to the rust-dev mailing list](https://mail.mozilla.org/pipermail/rust-dev/2013-December/007263.html):
> Subject: Let’s avoid having both foo() and foo_opt()
>
> We have some functions and methods such as [std::str::from_utf8](http://static.rust-lang.org/doc/master/std/str/fn.from_utf8.html) that may succeed and give a result, or fail when the input is invalid.
>
> 1. Sometimes we assume the input is valid and don’t want to deal with the error case. Task failure works nicely.
>
> 2. Sometimes we do want to do something different on invalid input, so returning an `Option<T>` works best.
>
> And so we end up with both `from_utf8` and `from_utf8_opt`. This particular case is worse because we also have `from_utf8_owned` and `from_utf8_owned_opt`, to cover everything.
>
> Multiplying names like this is just not good design. I’d like to reduce this pattern.
>
> Getting behavior 1. when you have 2. is easy: just call `.unwrap()` on the Option. I think we should rename every `foo_opt()` function or method to just `foo`, remove the old `foo()` behavior, and tell people (through documentation) to use `foo().unwrap()` if they want it back?
>
> The downsides are that unwrap is more verbose and gives less helpful error messages on task failure. But I think it’s worth it.
The email discussion has gone around long enough. Let’s discuss a concrete proposal. For the following functions or methods, I removed `foo` (that caused task failure) and renamed `foo_opt` (that returns `Option`) to just `foo`.
Vector methods:
* `get_opt` (rename only, `get` did not exist as it would have been just `[]`)
* `head_opt`
* `last_opt`
* `pop_opt`
* `shift_opt`
* `remove_opt`
`std::path::BytesContainer` method:
* `container_as_str_opt`
`std::str` functions:
* `from_utf8_opt`
* `from_utf8_owned_opt` (also remove the now unused `not_utf8` condition)
Is there something else that should receive the same treatment?
I did not rename `recv_opt` on channels based on @brson’s [feedback](https://mail.mozilla.org/pipermail/rust-dev/2013-December/007270.html).
Feel free to pick only some of these commits.
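A sketch of the two calling styles after the consolidation. Today's `std::str::from_utf8` returns a `Result`, which stands in here for the `Option`-returning signature proposed above; the shape is the same: one function, with `.unwrap()` recovering the old failing behaviour.

```rust
use std::str;

fn main() {
    // Style 1: assume the input is valid; fail loudly (a panic today) otherwise.
    let s = str::from_utf8(b"hello").unwrap();
    println!("{}", s);

    // Style 2: handle invalid input explicitly instead of failing.
    match str::from_utf8(&[0xff, 0xfe]) {
        Ok(s) => println!("valid: {}", s),
        Err(_) => println!("not UTF-8"),
    }
}
```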
To build for the Cortex-M series of ARM processors, LLC needs to be told to build for the Thumb instruction set. There are two ways to do this: either with the triple "thumb\*-\*-\*" or with -march=thumb (which just overrides the triple anyway). I chose the first way.
The following will fail because the local cc doesn't know what to do with -mthumb.
````
rustc test.rs --lib --target thumb-linux-eabi
error: linking with `cc` failed: exit code: 1
note: cc: error: unrecognized command line option ‘-mthumb’
````
Changing the linker works as expected.
````
rustc test.rs --lib --target thumb-linux-eabi --linker arm-none-eabi-gcc
````
Ideally I'd have the triple thumb-none-eabi, but adding a new OS looks like much more work (and I'm not familiar enough with what it does to know if it is needed).
* Stop using hardcoded numbers that all have to get updated when something changes (inevitable errors and rebase conflicts), and remove some unneeded -Z options (obsoleted over time).
* Remove `std::rt::borrowck`
The included test case would essentially never finish compiling without this
patch. It recurses twice at every ExprParen, meaning that the branching factor
is 2^n.
The included test case will take so long to parse on the old compiler that it'll
surely never let this crop up again.
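For illustration only (this is not the included test case), the kind of input that triggered the blowup is nothing more exotic than nested parentheses; each extra `ExprParen` layer doubled the work:

```rust
fn main() {
    // Each additional pair of parentheses doubled the old compiler's effort,
    // so a modest nesting depth was enough to make it appear to hang.
    let x = ((((((((((1))))))))));
    println!("{}", x);
}
```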
`Zero` and `One` have precise definitions in mathematics as the identities of the `Add` and `Mul` operations respectively. As such, types that implement these identities are now also required to implement their respective operator traits. This should reduce their misuse whilst still enabling them to be used in generalized algebraic structures (not just numbers). Existing usages of `#[deriving(Zero)]` in client code could break under these new rules, but this is probably a sign that they should have been using something like `#[deriving(Default)]` in the first place.
For more information regarding the mathematical definitions of the additive and multiplicative identities, see the following Wikipedia articles:
- http://wikipedia.org/wiki/Additive_identity
- http://wikipedia.org/wiki/Multiplicative_identity
Note that for floating point numbers the laws specified in the doc comments of `Zero::zero` and `One::one` may not always hold. This is true however for many other traits currently implemented by floating point numbers. What traits floating point numbers should and should not implement is an open question that is beyond the scope of this pull request.
The implementation of `std::num::pow` has been made more succinct and no longer requires `Clone`. The coverage of the associated unit test has also been increased to test for more combinations of bases, exponents, and expected results.
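A sketch of the constraint being introduced, with stand-in trait definitions (the real `Zero` and `One` live in the era's `std::num`): the additive identity requires `Add`, the multiplicative identity requires `Mul`.

```rust
use std::ops::{Add, Mul};

// Stand-in definitions mirroring the new requirements.
trait Zero: Add<Self, Output = Self> + Sized {
    fn zero() -> Self;
}

#[allow(dead_code)]
trait One: Mul<Self, Output = Self> + Sized {
    fn one() -> Self;
}

#[derive(Clone, Copy, Debug, PartialEq)]
struct Meters(f64);

impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}

impl Zero for Meters {
    fn zero() -> Meters {
        Meters(0.0)
    }
}

fn main() {
    // `Meters` can implement `Zero` because it has a sensible `Add`; it has
    // no `Mul`, so it deliberately cannot implement `One`.
    assert_eq!(Meters::zero() + Meters(2.0), Meters(2.0));
}
```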
Previously, they were treated like ~[] and &[] (which can have length
0), but fixed length vectors are fixed length, i.e. we know at compile
time if it's possible to have length zero (which is only for [T, .. 0]).
Fixes #11659.
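The user-visible effect, written with today's `[T; N]` syntax standing in for the era's `[T, ..N]`:

```rust
fn main() {
    // The length is part of the type, so this pattern cannot fail and no
    // longer needs to be treated as refutable.
    let [a, b, c] = [1, 2, 3];
    assert_eq!(a + b + c, 6);

    // The empty pattern is likewise irrefutable, but only against a
    // zero-length array type.
    let []: [i32; 0] = [];
}
```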
Ignore all newline characters in Base64 decoder to make it compatible with other Base64 decoders.
Most Base64 decoder implementations ignore all newline characters in the input string. Here are some examples:
Python:
```python
>>> "\nA\nQ\n=\n=\n".decode("base64")
'\x01'
```
Ruby:
```ruby
irb(main):001:0> "\nA\nQ\n=\n=\n".unpack("m")
=> ["\u0001"]
```
Erlang:
```erlang
1> base64:decode("\nA\nQ\n=\n=\n").
<<1>>
```
Moreover some Base64 encoders append newline character at the end of the output string by default:
Python:
```python
>>> "\x01".encode("base64")
'AQ==\n'
```
Ruby:
```ruby
irb(main):001:0> ["\x01"].pack("m")
=> "AQ==\n"
```
So I think it's fairly important for the Rust Base64 decoder to accept Base64 inputs even with newline characters in the padding.
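To make the intent concrete, here is a self-contained toy decoder (not the library's `extra::base64` code) that simply skips newline characters wherever they occur, including between padding characters:

```rust
// Minimal Base64 decoding, with '\r' and '\n' ignored everywhere.
fn decode_b64_ignoring_newlines(input: &str) -> Option<Vec<u8>> {
    const TABLE: &[u8; 64] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut buf: u32 = 0;
    let mut bits = 0;
    let mut out = Vec::new();
    for c in input.bytes() {
        match c {
            b'\n' | b'\r' => continue, // the behaviour being requested
            b'=' => continue,          // padding carries no data
            _ => {
                let val = TABLE.iter().position(|&t| t == c)? as u32;
                buf = (buf << 6) | val;
                bits += 6;
                if bits >= 8 {
                    bits -= 8;
                    out.push((buf >> bits) as u8);
                }
            }
        }
    }
    Some(out)
}

fn main() {
    // The newline-riddled input from the examples above still decodes to 0x01.
    assert_eq!(decode_b64_ignoring_newlines("\nA\nQ\n=\n=\n"), Some(vec![1]));
}
```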
There was an old and barely used implementation of pow, which expected
both parameters to be uint and required more traits to be implemented.
Since a new implementation for `pow` landed, I'm proposing to remove
this old impl in favor of the new one.
The benchmark shows that the new implementation is faster than the one being removed:
```
test num::bench::bench_pow_function ..bench: 9429 ns/iter (+/- 2055)
test num::bench::bench_pow_with_uint_function ...bench: 28476 ns/iter (+/- 2202)
```
I updated the json usage example to match the latest json.rs code, and deleted the old branch.
Compared to my last request, I removed example3 because it doesn't compile. I don't understand why and I don't have the time right now to investigate.
This means that compilation continues for longer, and so we can see more
errors per compile. This is mildly more user-friendly because it stops
users having to run rustc n times to see n macro errors: just run it
once to see all of them.
If the library is in the working directory, its path won't have a "/"
which will cause dlopen to search /usr/lib etc. It turns out that Path
auto-normalizes during joins so Path::new(".").join(path) is actually a
no-op.
The `malloc` family of functions may return a null pointer for a
zero-size allocation, which should not be interpreted as an
out-of-memory error.
If the implementation does not return a null pointer, then handling
this will result in memory savings for zero-size types.
This also switches some code to `malloc_raw` in order to maintain a
centralized point for handling out-of-memory in `rt::global_heap`.
Closes #11634
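A sketch of the rule (the `malloc_raw` name is borrowed from the description above; the panic stands in for the centralized out-of-memory handling in `rt::global_heap`):

```rust
use std::os::raw::c_void;

extern "C" {
    fn malloc(size: usize) -> *mut c_void;
    fn free(ptr: *mut c_void);
}

// A null return only signals out-of-memory when the requested size was non-zero.
fn malloc_raw(size: usize) -> *mut c_void {
    let p = unsafe { malloc(size) };
    if p.is_null() && size != 0 {
        panic!("out of memory");
    }
    p // may or may not be null for size == 0; both are acceptable
}

fn main() {
    let zero = malloc_raw(0); // never treated as OOM
    let some = malloc_raw(16);
    unsafe {
        free(some);
        free(zero); // free(NULL) is a no-op, so this is fine either way
    }
}
```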
NodeIds are sequential integers starting at zero, so we can achieve some
memory savings by just storing the items all in a line in a vector.
The occupancy for typical crates seems to be 75-80%, so we're already
more efficient than a HashMap (maximum occupancy 75%), not even counting
the extra book-keeping that HashMap does.
This is my first patch so feedback appreciated!
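A sketch of the data-structure idea with a made-up `NodeMap` type: because NodeIds are dense, sequential indices, a plain vector of `Option`s can stand in for a `HashMap` keyed by NodeId.

```rust
struct NodeMap<T> {
    items: Vec<Option<T>>,
}

impl<T> NodeMap<T> {
    fn new() -> NodeMap<T> {
        NodeMap { items: Vec::new() }
    }

    fn insert(&mut self, id: usize, value: T) {
        if id >= self.items.len() {
            self.items.resize_with(id + 1, || None);
        }
        self.items[id] = Some(value);
    }

    fn get(&self, id: usize) -> Option<&T> {
        self.items.get(id).and_then(|slot| slot.as_ref())
    }
}

fn main() {
    let mut map = NodeMap::new();
    map.insert(0, "crate root");
    map.insert(3, "some item");
    assert_eq!(map.get(3), Some(&"some item"));
    assert_eq!(map.get(2), None); // unoccupied slot, not an error
}
```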
Bug when initialising `bitv::Bitv::new(int, bool)` when `bool=true`. It created a `Bitv` with underlying representation `!0u` rather than the actual desired bit layout (e.g. `11111111` instead of `00001111`). This works OK on its own because a size attribute is included which keeps accesses within legal bounds. However, when using `BitvSet::from_bitv(Bitv)`, we then find that `bitvset.contains(i)` can return true when `i` should not in fact be in the set.
```
let bs = BitvSet::from_bitv(Bitv::new(100, true));
assert!(!bs.contains(&127)) //fails
```
The fix is to create the correct representation by treating the various cases separately and using a bitshift `(1<<nbits) - 1` to generate the correct number of `1`s where necessary.
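The arithmetic of the fix, in isolation (the full-word case still needs its own branch, which is why the cases are treated separately):

```rust
fn main() {
    // Build a word with exactly `nbits` low bits set...
    let nbits = 4;
    let mask: u32 = (1u32 << nbits) - 1;
    assert_eq!(mask, 0b0000_1111);
    // ...instead of !0, which sets every bit in the word.
    assert_ne!(mask, !0u32);

    // Shifting by the full word width would overflow, so a Bitv whose last
    // word is completely full has to be handled as a separate case.
    let full: u32 = !0;
    assert_eq!(full.count_ones(), 32);
}
```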
This makes pretty-print tests that have aux crates work correctly on Android.
Without it, they generate ICEs about incorrect node ids. Not sure why.
This commit re-works how the monitor() function works and how it both receives
and transmits errors. There are a few cases in which the compiler can abort:
1. A normal compiler error. In this case, the compiler raises a FatalError as
the failure value of the task. If this happens, then the monitor task does
nothing. It ignores all stderr output of the child task and it also
suppresses the failure message of the main task itself. This means that on a
normal compiler error just the error message itself is printed.
2. A normal internal compiler error. These are invoked from sess.span_bug() and
friends. In these cases, they follow the same path (raising a FatalError),
but they will also print an ICE message which has a URL to go report a bug.
3. An actual compiler bug. This happens whenever anything calls fail!() instead
of going through the session itself. In this case, we print out stuff about
RUST_LOG=2 and we by default capture all stderr and print via warn!() so it's
only printed out with the RUST_LOG var set.
For `use` statements, this means disallowing qualifiers when in functions and
disallowing `priv` outside of functions.
For `extern mod` statements, this means disallowing everything everywhere. It
may have been envisioned for `pub extern mod foo` to be a thing, but it
currently doesn't do anything (resolve doesn't pick it up), so better to err on
the side of forwards-compatibility and forbid it entirely for now.
Closes #9957
As part of #10387, this removes the `Primitive::{bits, bytes, is_signed}` methods and removes the trait's operator trait constraints for the reasons outlined below:
- The `Primitive::{bits, bytes}` associated functions were originally added to reflect the existing `BITS` and `BYTES` statics included in the numeric modules. These statics only exist as a workaround for Rust's lack of CTFE, and should be deprecated in the future in favor of using the `std::mem::size_of` function (see #11621).
- `Primitive::is_signed` seems to be of little utility and does not seem to be used anywhere in the Rust compiler or libraries. It is also rather ugly to call due to the `Option<Self>` workaround for #8888.
- The operator trait constraints are already covered by the `Num` trait.
The patch adds the missing pow method for all the implementations of the
Integer trait. This is a small addition that will most likely be
improved by the work happening in #10387.
Fixes #11499
This stores the stack of iterators inline (we have a maximum depth with
`uint` keys), and then uses direct pointer offsetting to manipulate it,
in a blazing fast way:
Before:
bench_iter_large ... bench: 43187 ns/iter (+/- 3082)
bench_iter_small ... bench: 618 ns/iter (+/- 288)
After:
bench_iter_large ... bench: 13497 ns/iter (+/- 1575)
bench_iter_small ... bench: 220 ns/iter (+/- 91)
Also, removes `.each_{key,value}_reverse` as an offering to
placate the gods of external iterators for my heinous sin of
attempting to add new internal ones (in a previous version of this
PR).
The new macro loading infrastructure needs the ability to force a
procedural-macro crate to be built with the host architecture rather than the
target architecture (because the compiler is just about to dlopen it).
* Reexport io::mem and io::buffered structs directly under io, make mem/buffered
private modules
* Remove with_mem_writer
* Remove DEFAULT_CAPACITY and use DEFAULT_BUF_SIZE (in io::buffered)
cc #11119
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except in
a `let` initializer, where we default to the innermost block. Rules
are documented in the code, but not in the manual (yet).
See new test run-pass/cleanup-value-scopes.rs for examples.
- Refactors Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
r? @pcwalton
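Roughly the user-visible consequence of the first point above, in today's Rust (whose temporary-scope rules descend from this change):

```rust
fn make() -> String {
    String::from("temporary")
}

fn main() {
    // In a `let` initializer, the temporary returned by make() gets the
    // enclosing block as its cleanup scope, so this borrow stays valid.
    let r = &make();
    println!("{}", r);

    // In an ordinary statement, the temporary only lives for the statement
    // and is cleaned up as soon as the line finishes.
    println!("{}", make().len());
}
```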
The patch adds a `pow` function for types implementing the `One`, `Mul` and
`Clone` traits.
The patch also renames f32 and f64 pow into powf in order to still have
a way to easily compute float powers; it uses LLVM's intrinsics.
The pow implementation for all num types uses exponentiation by
squaring.
Fixes bug #11499
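A sketch of exponentiation by squaring, the algorithm named above; the multiplicative identity is passed in explicitly here instead of coming from a `One` bound:

```rust
use std::ops::Mul;

fn pow<T: Copy + Mul<Output = T>>(one: T, mut base: T, mut exp: u32) -> T {
    let mut acc = one;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base; // multiply in the current bit of the exponent
        }
        base = base * base; // square for the next bit
        exp >>= 1;
    }
    acc
}

fn main() {
    assert_eq!(pow(1u64, 2u64, 10), 1024);
    assert_eq!(pow(1.0f64, 3.0f64, 3), 27.0);
}
```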
Previously I had omitted this case since function calls don't get the same
treatment on the RHS, but it's different on the pattern and is more consistent
-- the goal is to identify `let` statements where `ref` bindings create
interior pointers.
The test run summary currently prints the wrong number of tests run. This PR fixes it by adding a newline to the log output, and also adds support for counting bench runs.
Closes #11381
Use a lookup table, SHIFT_MASK_TABLE, that for every possible four-bit
prefix holds the number of times the value should be right-shifted and what
the right-shifted value should be masked with. This way we can get rid of the
branches, which in my testing gives approximately a 2x speedup.
Timings on Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
```
-- Before --
running 5 tests
test ebml::tests::test_vuint_at ... ok
test ebml::bench::vuint_at_A_aligned ... bench: 494 ns/iter (+/- 3)
test ebml::bench::vuint_at_A_unaligned ... bench: 494 ns/iter (+/- 4)
test ebml::bench::vuint_at_D_aligned ... bench: 467 ns/iter (+/- 5)
test ebml::bench::vuint_at_D_unaligned ... bench: 467 ns/iter (+/- 5)

-- After --
running 5 tests
test ebml::tests::test_vuint_at ... ok
test ebml::bench::vuint_at_A_aligned ... bench: 181 ns/iter (+/- 2)
test ebml::bench::vuint_at_A_unaligned ... bench: 192 ns/iter (+/- 1)
test ebml::bench::vuint_at_D_aligned ... bench: 181 ns/iter (+/- 3)
test ebml::bench::vuint_at_D_unaligned ... bench: 197 ns/iter (+/- 6)
```
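For a feel of the technique, here is a sketch assuming EBML-style variable-length integers (the leading bits of the first byte encode the length); the table contents follow from that format and may not match the real SHIFT_MASK_TABLE exactly. The first byte's top four bits index into the table, which yields the right shift and mask to apply to four bytes read as one big-endian u32, with no branching on the length.

```rust
// (right shift, mask) indexed by the first byte's top four bits.
const SHIFT_MASK_TABLE: [(u32, u32); 16] = [
    (0, 0x0),          // 0b0000: invalid / longer than 4 bytes
    (0, 0x0fff_ffff),  // 0b0001: 4-byte value
    (8, 0x001f_ffff),  // 0b001x: 3-byte value
    (8, 0x001f_ffff),
    (16, 0x0000_3fff), // 0b01xx: 2-byte value
    (16, 0x0000_3fff),
    (16, 0x0000_3fff),
    (16, 0x0000_3fff),
    (24, 0x0000_007f), // 0b1xxx: 1-byte value
    (24, 0x0000_007f),
    (24, 0x0000_007f),
    (24, 0x0000_007f),
    (24, 0x0000_007f),
    (24, 0x0000_007f),
    (24, 0x0000_007f),
    (24, 0x0000_007f),
];

fn vuint_at(data: &[u8], start: usize) -> u32 {
    // Always read four bytes; the table decides how much of them is payload.
    let word = u32::from_be_bytes([
        data[start],
        data[start + 1],
        data[start + 2],
        data[start + 3],
    ]);
    let (shift, mask) = SHIFT_MASK_TABLE[(data[start] >> 4) as usize];
    (word >> shift) & mask
}

fn main() {
    // 0x85 = 0b1000_0101: a one-byte vuint with value 5.
    assert_eq!(vuint_at(&[0x85, 0, 0, 0], 0), 5);
    // 0x40 0x0a = 0b0100_0000 0b0000_1010: a two-byte vuint with value 10.
    assert_eq!(vuint_at(&[0x40, 0x0a, 0, 0], 0), 10);
}
```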
This is a first pass on support for procedural macros that aren't hardcoded into libsyntax. It is **not yet ready to merge** but I've opened a PR to have a chance to discuss some open questions and implementation issues.
Example
=======
Here's a silly example showing off the basics:
my_synext.rs
```rust
#[feature(managed_boxes, globs, macro_registrar, macro_rules)];

extern mod syntax;

use syntax::ast::{Name, token_tree};
use syntax::codemap::Span;
use syntax::ext::base::*;
use syntax::parse::token;

#[macro_export]
macro_rules! exported_macro (() => (2))

#[macro_registrar]
pub fn macro_registrar(register: |Name, SyntaxExtension|) {
    register(token::intern(&"make_a_1"),
             NormalTT(@SyntaxExpanderTT {
                 expander: SyntaxExpanderTTExpanderWithoutContext(expand_make_a_1),
                 span: None,
             } as @SyntaxExpanderTTTrait,
             None));
}

pub fn expand_make_a_1(cx: &mut ExtCtxt, sp: Span, tts: &[token_tree]) -> MacResult {
    if !tts.is_empty() {
        cx.span_fatal(sp, "make_a_1 takes no arguments");
    }
    MRExpr(quote_expr!(cx, 1i))
}
```
main.rs:
```rust
#[feature(phase)];

#[phase(syntax)]
extern mod my_synext;

fn main() {
    assert_eq!(1, make_a_1!());
    assert_eq!(2, exported_macro!());
}
```
Overview
=======
Crates that contain syntax extensions need to define a function with the following signature and annotation:
```rust
#[macro_registrar]
pub fn registrar(register: |ast::Name, ext::base::SyntaxExtension|) { ... }
```
that should call the `register` closure with each extension it defines. `macro_rules!` style macros can be tagged with `#[macro_export]` to be exported from the crate as well.
Crates that wish to use externally loadable syntax extensions load them by adding the `#[phase(syntax)]` attribute to an `extern mod`. All extensions registered by the specified crate are loaded with the same scoping rules as `macro_rules!` macros. If you want to use a crate both for syntax extensions and normal linkage, you can use `#[phase(syntax, link)]`.
Open questions
===========
* ~~Does the `macro_crate` syntax make sense? It wraps an entire `extern mod` declaration which looks a bit weird but is nice in the sense that the crate lookup logic can be identical between normal external crates and external macro crates. If the `extern mod` syntax, changes, this will get it for free, etc.~~ Changed to a `phase` attribute.
* ~~Is the magic name `macro_crate_registration` the right way to handle extension registration? It could alternatively be handled by a function annotated with `#[macro_registration]` I guess.~~ Switched to an attribute.
* The crate loading logic lives inside of librustc, which means that the syntax extension infrastructure can't directly access it. I've worked around this by passing a `CrateLoader` trait object from the driver to libsyntax that can call back into the crate loading logic. It should be possible to pull things apart enough that this isn't necessary anymore, but it will be an enormous refactoring project. I think we'll need to create a couple of new libraries: libsynext libmetadata/ty and libmiddle.
* Item decorator extensions can be loaded but the `deriving` decorator itself can't be extended so you'd need to do e.g. `#[deriving_MyTrait] #[deriving(Clone)]` instead of `#[deriving(MyTrait, Clone)]`. Is this something worth bothering with for now?
Remaining work
===========
- [x] ~~There is not yet support for rustdoc downloading and compiling referenced macro crates as it does for other referenced crates. This shouldn't be too hard I think.~~
- [x] ~~This is not testable at stage1 and sketchily testable at stages above that. The stage *n* rustc links against the stage *n-1* libsyntax and librustc. Unfortunately, crates in the test/auxiliary directory link against the stage *n* libstd, libextra, libsyntax, etc. This causes macro crates to fail to properly dynamically link into rustc since names end up being mangled slightly differently. In addition, when rustc is actually installed onto a system, there are actually two copies of libsyntax, libstd, etc.: the ones that user code links against and a separate set from the previous stage that rustc itself uses. By this point in the bootstrap process, the two library versions *should probably* be binary compatible, but it doesn't seem like a sure thing. Fixing this is apparently hard, but necessary to properly cross compile as well and is being tracked in #11145.~~ The offending tests are ignored during `check-stage1-rpass` and `check-stage1-cfail`. When we get a snapshot that has this commit, I'll look into how feasible it'll be to get them working on stage1.
- [x] ~~`macro_rules!` style macros aren't being exported. Now that the crate loading infrastructure is there, this should just require serializing the AST of the macros into the crate metadata and yanking them out again, but I'm not very familiar with that part of the compiler.~~
- [x] ~~The `macro_crate_registration` function isn't type-checked when it's loaded. I poked around in the `csearch` infrastructure a bit but didn't find any super obvious ways of checking the type of an item with a certain name. Fixing this may also eliminate the need to `#[no_mangle]` the registration function.~~ Now that the registration function is identified by an attribute, typechecking this will be like typechecking other annotated functions.
- [x] ~~The dynamic libraries that are loaded are never unloaded. It shouldn't require too much work to tie the lifetime of the `DynamicLibrary` object to the `MapChain` that its extensions are loaded into.~~
- [x] ~~The compiler segfaults sometimes when loading external crates. The `DynamicLibrary` reference and code objects from that library are both put into the same hash table. When the table drops, due to the random ordering the library sometimes drops before the objects do. Once #11228 lands it'll be easy to fix this.~~