For `use` statements, this means disallowing visibility qualifiers inside
functions and disallowing `priv` outside of functions.
For `extern mod` statements, this means disallowing visibility qualifiers
everywhere. `pub extern mod foo` may have been envisioned as a feature, but it
currently does nothing (resolve doesn't pick it up), so it is better to err on
the side of forwards-compatibility and forbid it entirely for now.
Closes #9957
As part of #10387, this removes the `Primitive::{bits, bytes, is_signed}` methods and the trait's operator trait constraints, for the reasons outlined below:
- The `Primitive::{bits, bytes}` associated functions were originally added to reflect the existing `BITS` and `BYTES` statics included in the numeric modules. These statics exist only as a workaround for Rust's lack of CTFE, and should be deprecated in the future in favor of the `std::mem::size_of` function (see #11621); a minimal sketch of that replacement follows this list.
- `Primitive::is_signed` seems to be of little utility and does not seem to be used anywhere in the Rust compiler or libraries. It is also rather ugly to call due to the `Option<Self>` workaround for #8888.
- The operator trait constraints are already covered by the `Num` trait.
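As a hedged illustration of the replacement suggested above (not code from this patch; `bits_of`/`bytes_of` are hypothetical helpers written in today's Rust), the widths can be derived from `std::mem::size_of` instead of per-type statics:

```rust
use std::mem::size_of;

// Hypothetical helpers: derive byte and bit widths from size_of instead of
// maintaining dedicated BYTES/BITS statics for every numeric type.
fn bytes_of<T>() -> usize {
    size_of::<T>()
}

fn bits_of<T>() -> usize {
    size_of::<T>() * 8
}

fn main() {
    assert_eq!(bytes_of::<u32>(), 4);
    assert_eq!(bits_of::<u32>(), 32);
    assert_eq!(bits_of::<i64>(), 64);
    println!("ok");
}
```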
If the library is in the working directory, its path won't contain a "/",
which causes dlopen to search /usr/lib and the other system directories. It
turns out that Path auto-normalizes during joins, so `Path::new(".").join(path)`
is actually a no-op.
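A minimal sketch of the idea, using today's `std::path` API and a hypothetical `dlopen_name` helper (not the compiler's actual code): make sure the name handed to dlopen contains a separator, prepending "./" textually so no normalization can strip it.

```rust
use std::ffi::OsString;
use std::path::{Path, PathBuf};

// Hypothetical helper: bare file names like "libfoo.so" get "./" prepended as
// a plain string so dlopen performs a relative lookup instead of searching
// /usr/lib and the other default directories.
fn dlopen_name(path: &Path) -> PathBuf {
    let is_bare = path.parent().map_or(true, |p| p.as_os_str().is_empty());
    if is_bare {
        let mut name = OsString::from("./");
        name.push(path.as_os_str());
        PathBuf::from(name)
    } else {
        path.to_path_buf()
    }
}

fn main() {
    assert_eq!(dlopen_name(Path::new("libfoo.so")), PathBuf::from("./libfoo.so"));
    assert_eq!(dlopen_name(Path::new("build/libfoo.so")), PathBuf::from("build/libfoo.so"));
    println!("ok");
}
```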
The `malloc` family of functions may return a null pointer for a
zero-size allocation, which should not be interpreted as an
out-of-memory error.
If the implementation does not return a null pointer for such requests,
handling the zero-size case separately still results in memory savings for
zero-size types.
This also switches some code to `malloc_raw` in order to maintain a
centralized point for handling out-of-memory in `rt::global_heap`.
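A minimal sketch of the policy described above, written against today's `std::alloc` API rather than the old `rt::global_heap` code (`malloc_raw` here is only an illustrative stand-in):

```rust
use std::alloc::{alloc, dealloc, handle_alloc_error, Layout};
use std::ptr;

// Illustrative wrapper: zero-size requests never reach the allocator, so a
// null result can never be misread as out-of-memory; for non-zero sizes, a
// null return is handled in one central place.
fn malloc_raw(size: usize) -> *mut u8 {
    if size == 0 {
        // Zero-size allocation: hand back a null pointer; callers must not
        // treat this as an out-of-memory error.
        return ptr::null_mut();
    }
    let layout = Layout::from_size_align(size, 1).expect("bad layout");
    // SAFETY: the layout has a non-zero size.
    let p = unsafe { alloc(layout) };
    if p.is_null() {
        // Centralized out-of-memory handling.
        handle_alloc_error(layout);
    }
    p
}

fn main() {
    assert!(malloc_raw(0).is_null()); // not an error for zero-size requests
    let p = malloc_raw(16);
    assert!(!p.is_null());
    // SAFETY: `p` was allocated above with this exact layout.
    unsafe { dealloc(p, Layout::from_size_align(16, 1).unwrap()) };
    println!("ok");
}
```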
Closes #11634
The patch adds the missing `pow` method to all implementations of the
`Integer` trait. This is a small addition that will most likely be improved
by the work happening in #10387.
Fixes #11499
This stores the stack of iterators inline (we have a maximum depth with
`uint` keys), and then uses direct pointer offsetting to manipulate it,
in a blazing fast way:
Before:
bench_iter_large ... bench: 43187 ns/iter (+/- 3082)
bench_iter_small ... bench: 618 ns/iter (+/- 288)
After:
bench_iter_large ... bench: 13497 ns/iter (+/- 1575)
bench_iter_small ... bench: 220 ns/iter (+/- 91)
Also, removes `.each_{key,value}_reverse` as an offering to
placate the gods of external iterators for my heinous sin of
attempting to add new internal ones (in a previous version of this
PR).
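A rough sketch of the inline-stack idea in today's Rust (hypothetical types, not the actual TrieMap iterator; plain indexing stands in for the pointer offsetting mentioned above): because each trie level consumes a fixed number of key bits, the depth is bounded and the stack can be a fixed-size array.

```rust
// Each level consumes 4 bits of a usize key, so the traversal can never nest
// deeper than usize::BITS / 4 frames; that bound makes inline storage viable.
const BITS_PER_LEVEL: u32 = 4;
const MAX_DEPTH: usize = (usize::BITS / BITS_PER_LEVEL) as usize;

// Stand-in for one level's state: which child to visit next.
#[derive(Clone, Copy, Default)]
struct Frame {
    next_child: usize,
}

struct TrieIter {
    stack: [Frame; MAX_DEPTH], // inline storage: no per-iterator heap allocation
    depth: usize,              // number of live frames in `stack`
}

impl TrieIter {
    fn new() -> TrieIter {
        TrieIter { stack: [Frame::default(); MAX_DEPTH], depth: 0 }
    }

    fn push(&mut self, frame: Frame) {
        // The depth bound guarantees this index stays in range, so the stack
        // never needs to grow or reallocate.
        self.stack[self.depth] = frame;
        self.depth += 1;
    }

    fn pop(&mut self) -> Option<Frame> {
        if self.depth == 0 {
            None
        } else {
            self.depth -= 1;
            Some(self.stack[self.depth])
        }
    }
}

fn main() {
    let mut it = TrieIter::new();
    it.push(Frame { next_child: 3 });
    assert_eq!(it.pop().map(|f| f.next_child), Some(3));
    assert!(it.pop().is_none());
    println!("ok");
}
```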
The new macro loading infrastructure needs the ability to force a
procedural-macro crate to be built with the host architecture rather than the
target architecture (because the compiler is just about to dlopen it).
* Reexport io::mem and io::buffered structs directly under io, make mem/buffered
private modules
* Remove with_mem_writer
* Remove DEFAULT_CAPACITY and use DEFAULT_BUF_SIZE (in io::buffered)
cc #11119
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except in a `let`
initializer, where we default to the innermost block. Rules are
documented in the code, but not in the manual (yet).
See the new test run-pass/cleanup-value-scopes.rs for examples, and the
sketch at the end of this description.
- Refactor Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
r? @pcwalton
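A hedged sketch of the rule in the first bullet, written in today's Rust (today's drop behavior, which may differ in detail from this patch): a temporary created inside an ordinary statement dies at the end of that statement, while one borrowed directly in a `let` initializer lives to the end of the enclosing block.

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn make(name: &'static str) -> Noisy {
    Noisy(name)
}

fn main() {
    // Temporary inside an ordinary statement: dropped when the statement ends,
    // so "dropping statement temp" prints before the next line.
    let _len = make("statement temp").0.len();
    println!("after the statement");

    // Temporary borrowed directly in a `let` initializer: kept alive to the
    // end of the enclosing block.
    let kept = &make("let-initializer temp");
    println!("still alive: {}", kept.0);
} // "let-initializer temp" is dropped here, after everything above has printed
```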
This means that compilation continues for longer, so we can report more
errors per compile. This is mildly more user-friendly because it saves users
from having to run rustc n times to see n macro errors: just run it once to
see all of them.
The patch adds a `pow` function for types implementing the `One`, `Mul` and
`Clone` traits.
The patch also renames the f32 and f64 `pow` functions to `powf` in order to
keep an easy way to compute float powers; these use LLVM's intrinsics.
The `pow` implementation for all num types uses exponentiation by squaring.
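A hedged sketch of exponentiation by squaring over those three traits (the `One` trait below is a local stand-in rather than the library's trait, and the names are illustrative):

```rust
use std::ops::Mul;

// Local stand-in for the numeric `One` trait mentioned above.
trait One {
    fn one() -> Self;
}

impl One for u64 {
    fn one() -> Self { 1 }
}

// Exponentiation by squaring: O(log exp) multiplications.
fn pow<T: One + Mul<Output = T> + Clone>(mut base: T, mut exp: u32) -> T {
    let mut acc = T::one();
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base.clone(); // fold in the current square for set bits
        }
        base = base.clone() * base; // square once per bit of the exponent
        exp >>= 1;
    }
    acc
}

fn main() {
    assert_eq!(pow(3u64, 7), 2187);
    assert_eq!(pow(10u64, 0), 1);
    println!("ok");
}
```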
Fixes bug #11499
too.
Previously I had omitted this case since function calls don't get the same
treatment on the RHS, but the pattern side is different, and this is more
consistent -- the goal is to identify `let` statements where `ref` bindings
create interior pointers.
The test run summary currently prints the wrong number of tests run. This PR fixes it by adding a newline to the log output, and also adds support for counting bench runs.
Closes #11381
Use a lookup table, SHIFT_MASK_TABLE, that for every possible four-bit prefix
holds how far the value should be right-shifted and what the shifted value
should be masked with. This way we can get rid of the branches, which in my
testing gives approximately a 2x speedup.
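A hedged reconstruction of the branchless decode: the table contents below are derived from the 1-4 byte vuint format rather than copied from the patch, and `vuint_at` is a stand-alone illustrative version, not the serializer's actual code.

```rust
// Indexed by the top four bits of the first byte:
//   1xxx -> 1-byte vuint, 01xx -> 2 bytes, 001x -> 3 bytes, 0001 -> 4 bytes.
// Each entry: (right shift of the big-endian u32, value mask, encoded length).
const SHIFT_MASK_TABLE: [(u32, u32, usize); 16] = [
    (0, 0, 0),           // 0000: longer vuints are not handled in this sketch
    (0, 0x0fff_ffff, 4), // 0001
    (8, 0x001f_ffff, 3), // 0010
    (8, 0x001f_ffff, 3), // 0011
    (16, 0x3fff, 2),     // 0100..0111
    (16, 0x3fff, 2),
    (16, 0x3fff, 2),
    (16, 0x3fff, 2),
    (24, 0x7f, 1),       // 1000..1111
    (24, 0x7f, 1),
    (24, 0x7f, 1),
    (24, 0x7f, 1),
    (24, 0x7f, 1),
    (24, 0x7f, 1),
    (24, 0x7f, 1),
    (24, 0x7f, 1),
];

/// Decode the vuint starting at `pos`; returns (value, bytes consumed).
/// This sketch assumes at least four readable bytes at `pos`; a complete
/// version would need a fallback near the end of the buffer.
fn vuint_at(data: &[u8], pos: usize) -> (u32, usize) {
    let word = u32::from_be_bytes([data[pos], data[pos + 1], data[pos + 2], data[pos + 3]]);
    let (shift, mask, len) = SHIFT_MASK_TABLE[(data[pos] >> 4) as usize];
    ((word >> shift) & mask, len)
}

fn main() {
    let one = [0b1000_0101u8, 0, 0, 0];    // 1-byte vuint encoding 5, padded
    assert_eq!(vuint_at(&one, 0), (5, 1));
    let two = [0b0100_0001u8, 0x02, 0, 0]; // 2-byte vuint encoding 0x102, padded
    assert_eq!(vuint_at(&two, 0), (0x102, 2));
    println!("ok");
}
```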
Timings on Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
-- Before --
running 5 tests
test ebml::tests::test_vuint_at ... ok
test ebml::bench::vuint_at_A_aligned ... bench: 494 ns/iter (+/- 3)
test ebml::bench::vuint_at_A_unaligned ... bench: 494 ns/iter (+/- 4)
test ebml::bench::vuint_at_D_aligned ... bench: 467 ns/iter (+/- 5)
test ebml::bench::vuint_at_D_unaligned ... bench: 467 ns/iter (+/- 5)
-- After --
running 5 tests
test ebml::tests::test_vuint_at ... ok
test ebml::bench::vuint_at_A_aligned ... bench: 181 ns/iter (+/- 2)
test ebml::bench::vuint_at_A_unaligned ... bench: 192 ns/iter (+/- 1)
test ebml::bench::vuint_at_D_aligned ... bench: 181 ns/iter (+/- 3)
test ebml::bench::vuint_at_D_unaligned ... bench: 197 ns/iter (+/- 6)