Updated all users of the old HashMap and HashSet `.consume()` to the
new `.consume()`, iterating over it with a for loop.
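For illustration, a minimal sketch of the pattern using today's
`std::collections::HashMap`, where `into_iter()` plays the role the
by-value `.consume()` played at the time (modern equivalent, not the
2013 API):

```rust
use std::collections::HashMap;

fn main() {
    let mut m: HashMap<String, u32> = HashMap::new();
    m.insert("a".to_string(), 1);
    m.insert("b".to_string(), 2);

    // The by-value iterator consumes the map, handing out owned
    // key/value pairs; afterwards `m` is moved and unusable.
    for (k, v) in m.into_iter() {
        println!("{} => {}", k, v);
    }
}
```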
Since `.consume()` takes the map or set by value, some awkward extra
code is needed in librusti's use of `@mut HashMap`, where the map value
cannot be directly moved out.
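The workaround looks roughly like the sketch below; `@mut` is long
gone, so `Rc<RefCell<...>>` stands in for it here, and `mem::replace`
is the "awkward extra code" that swaps the map out of the shared box
before consuming it:

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::mem;
use std::rc::Rc;

fn main() {
    // Stand-in for an `@mut HashMap`: shared, mutable, not movable out.
    let shared: Rc<RefCell<HashMap<u32, u32>>> =
        Rc::new(RefCell::new(HashMap::new()));
    shared.borrow_mut().insert(1, 2);

    // The map cannot be moved out of the shared box directly, so swap
    // in an empty map first, then consume the owned one.
    let owned = mem::replace(&mut *shared.borrow_mut(), HashMap::new());
    for (k, v) in owned {
        println!("{} => {}", k, v);
    }
}
```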
Note that this is not actually *used* by default; it still has to be
enabled via configuration, because you might want to:
- Compile all .rs files with `rustc %` (where each can be built itself)
- Compile all .rs files with `rustc some-file.rs` (where you are editing
part of a crate)
- Compile with a different tool, such as `make`. (In this case you might
put a `~/.vim/after/compiler/rustc.vim` in place to match such cases,
setting `makeprg` and extending `errorformat` as appropriate. That
should probably go in a different compiler mode, e.g. make-rustc.)
To try it out, run `:compiler rustc`. Then `:make` on a file you would
run `rustc` on will work its magic, invoking rustc. To automate this,
you could have something like `autocmd FileType rust compiler rustc` in
your Vim config.
r? anyone
The only bit I'm a little concerned about is whether there's some way the assignments to `hi` could somehow still be necessary; but I think that could only be the case if it had been `&const` borrowed (or whatever the hypothetical syntax for that is), and that's not going on in this file.
This should get us over the hump of activating basic ratcheting on codegen tests, at least. It also puts in place optional (disabled by default) ratcheting on all #[bench] tests, and records all metrics from them to harvestable .json files in any case.
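For reference, the kind of test this applies to looks like the sketch below (nightly-only `test` crate; the names are illustrative). The ns/iter figure the harness measures is the metric that gets recorded and ratcheted:

```rust
#![feature(test)]
extern crate test;

use test::Bencher;

#[bench]
fn bench_sum(b: &mut Bencher) {
    let v: Vec<u64> = (0..1000).collect();
    // The harness times this closure; the resulting ns/iter is the
    // metric that can be written out and compared against a saved run.
    b.iter(|| v.iter().sum::<u64>());
}
```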
It disables the insertion of `use std::prelude::*;` at the top of
all the modules below the item on which it is placed (including that
item itself).
(Similar to GHC's `-XNoImplicitPrelude`.)
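A small sketch of the effect, using the attribute's modern spelling
`#[no_implicit_prelude]` (the surrounding names are illustrative):

```rust
#[no_implicit_prelude]
mod bare {
    // No `use std::prelude::*;` is injected here, so prelude names
    // like `Option` and `Vec` must be written with full paths.
    pub fn first_byte(v: &::std::vec::Vec<u8>) -> ::std::option::Option<u8> {
        v.first().copied()
    }
}
```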
This is the first of a series of refactorings to get rid of the `codemap::spanned<T>` struct (see this thread for more information: https://mail.mozilla.org/pipermail/rust-dev/2013-July/004798.html).
The changes in this PR should not change any semantics, just rename `ast::blk_` to `ast::blk` and add a span field to it. 95% of the changes were of the form `block.node.id` -> `block.id`. Only some transformations in `libsyntax::fold` were not entirely trivial.
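A condensed sketch of the shape change, in modern syntax with illustrative field sets (the real types are `ast::blk_`/`ast::blk` and `codemap::spanned<T>`):

```rust
#![allow(dead_code)]

struct Span { lo: u32, hi: u32 }

// Before: the block payload was wrapped in `spanned<T>`, so callers
// reached fields through `.node`, as in `block.node.id`.
struct Spanned<T> { node: T, span: Span }
struct BlkInner { id: u32 }
type BlkBefore = Spanned<BlkInner>;

// After: the span becomes a plain field on the struct itself, and the
// same access is simply `block.id`.
struct BlkAfter { id: u32, span: Span }
```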
Currently, our intrinsics are generated as functions that have the
usual setup, which means an alloca, and therefore also a jump, for
those intrinsics that return an immediate value. This is especially bad
for unoptimized builds because it means that an intrinsic like
"contains_managed" that should be just "ret 0" or "ret 1" actually ends
up allocating stack space, doing a jump and a store/load sequence
before it finally returns the value.
To fix that, we need a way to stop the generic function declaration
mechanism from allocating stack space for the return value. This
implicitly also kills the jump, because the block for static allocas
isn't required anymore.
Additionally, trans_intrinsic needs to build the return itself instead
of calling finish_fn, because the latter relies on the availability of
the return value pointer.
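As a rough illustration of the control-flow difference (the `Builder`
type and the IR strings below are stand-ins, not rustc's actual trans
code):

```rust
struct Builder { ir: Vec<String> }

impl Builder {
    // Old scheme: finish_fn stores into a return-value alloca and
    // jumps to a common exit block that loads and returns it.
    fn ret_via_slot(&mut self, val: u8) {
        self.ir.push("  %retslot = alloca i1".to_string());
        self.ir.push(format!("  store i1 {}, i1* %retslot", val));
        self.ir.push("  br label %return".to_string());
        self.ir.push("return:".to_string());
        self.ir.push("  %v = load i1, i1* %retslot".to_string());
        self.ir.push("  ret i1 %v".to_string());
    }

    // New scheme: the intrinsic builds the return itself, so e.g.
    // contains_managed collapses to a bare `ret i1 0` or `ret i1 1`.
    fn ret_direct(&mut self, val: u8) {
        self.ir.push(format!("  ret i1 {}", val));
    }
}

fn main() {
    let mut before = Builder { ir: Vec::new() };
    let mut after = Builder { ir: Vec::new() };
    before.ret_via_slot(0);
    after.ret_direct(0);
    println!("before:\n{}", before.ir.join("\n"));
    println!("after:\n{}", after.ir.join("\n"));
}
```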
With these changes, we get the bare minimum code required for our
intrinsics, which makes them small enough that inlining them makes the
resulting code smaller, so we can mark them as "always inline" to get
better performing unoptimized builds.
Optimized builds also benefit slightly from this change as there's less
code for LLVM to translate and the smaller intrinsics help it to make
better inlining decisions for a few code paths.
Building stage2 librustc gets ~1% faster for the optimized version and 5% for
the unoptimized version.
Most arms of the huge match contain the same code, differing only in
small details like the name of the llvm intrinsic that is to be called.
Thus the duplicated code can be factored out into a few functions that
take some parameters to handle the differences.
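Schematically, the refactoring follows the pattern below (function and
arm names invented for illustration):

```rust
// Shared helper: declare and call the named llvm intrinsic. Here it
// just renders a string so the sketch stays self-contained.
fn call_llvm_intrinsic(name: &str, arg: f64) -> String {
    format!("call double @{}(double {})", name, arg)
}

fn trans_math_intrinsic(item: &str, arg: f64) -> Option<String> {
    // Arms that used to carry near-identical code now only map the
    // item name to the llvm intrinsic name.
    let llvm_name = match item {
        "sqrtf64" => "llvm.sqrt.f64",
        "sinf64" => "llvm.sin.f64",
        "cosf64" => "llvm.cos.f64",
        _ => return None,
    };
    Some(call_llvm_intrinsic(llvm_name, arg))
}
```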
Simulates the borrow checks done for `@mut` boxes, or at least the same idea. This allows you to store owned values in TLS and mutate them while they're owned by TLS.
This should remove the need for a `pop`/`set` pattern to mutate data structures in TLS.
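In today's Rust the same idea looks roughly like this (`thread_local!` with `RefCell` standing in for the 2013 local-data API):

```rust
use std::cell::RefCell;

thread_local! {
    // An owned value stored in TLS.
    static REGISTRY: RefCell<Vec<String>> = RefCell::new(Vec::new());
}

fn record(name: &str) {
    // Mutate the value in place while TLS still owns it, instead of
    // popping it out, mutating it, and setting it back.
    REGISTRY.with(|r| r.borrow_mut().push(name.to_string()));
}

fn main() {
    record("hello");
    REGISTRY.with(|r| println!("{:?}", r.borrow()));
}
```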