Hi,
As noted in #6804, a pattern that contains `NaN` will never match because `NaN != NaN`. This adds a warning for such a case. The first commit handles the basic case and the second one generalizes it to more complex patterns using `walk_pat`.
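For reference, a minimal illustration of the IEEE 754 rule that makes such arms dead (modern syntax):

```rust
fn main() {
    let nan = f64::NAN;
    // NaN compares unequal to everything, including itself, so a
    // match arm built from a NaN constant can never fire.
    assert!(nan != nan);
    assert!(!(nan == nan));
}
```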
Until now, we only optimized away impossible branches when there is a
literal true/false in the code. But since the LLVM IR builder already does
constant folding for us, we can trivially expand that to work with
constants as well.
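A hedged sketch of the kind of branch this now removes (modern syntax; the constant and names are illustrative):

```rust
// With constant folding in the IR builder, this branch is dropped at
// translation time just like a literal `if false { ... }` would be.
const ENABLE_TRACE: bool = false; // a named constant, not a literal

fn main() {
    if ENABLE_TRACE {
        // Never emitted: the condition folds to false.
        println!("tracing enabled");
    }
}
```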
Refs #7834
Infers the type of constants used as discriminants and ensures they are
integral, instead of forcing them to be a signed integer.
Also, stores discriminant values as uint instead of int internally and
deals with related fallout.
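A sketch of what this permits, in modern syntax (`#[repr(u32)]` stands in for the inferred integral type; the constant is illustrative):

```rust
// The discriminant may now be inferred as an unsigned integral type
// instead of being forced into a signed integer.
const HIGH_BIT: u32 = 0x8000_0000; // would overflow a 32-bit signed int

#[repr(u32)]
enum Flag {
    High = HIGH_BIT,
    Low = 1,
}

fn main() {
    assert_eq!(Flag::High as u32, 0x8000_0000);
    assert_eq!(Flag::Low as u32, 1);
}
```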
Fixes issue #7994
This is a cleanup pull request that does:
* removes `os::as_c_charp`
* moves `str::as_buf` and `str::as_c_str` into `StrSlice`
* converts some functions from `StrSlice::as_buf` to `StrSlice::as_c_str`
* renames `StrSlice::as_buf` to `StrSlice::as_imm_buf` (and adds `StrSlice::as_mut_buf` to match `vec.rs`; see the sketch below)
* renames `UniqueStr::as_bytes_with_null_consume` to `UniqueStr::to_bytes`
* and other misc cleanups and minor optimizations
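A hedged before/after sketch of the renames (call shapes assumed from the list above, not checked against the tree):

```rust
fn main() {
    // before: "hello".as_buf(|ptr, len| ...)      (read-only buffer)
    // after:  "hello".as_imm_buf(|ptr, len| ...)  (renamed; as_mut_buf added)
    //
    // before: str::as_c_str("hello", |cstr| ...)  (free function)
    // after:  "hello".as_c_str(|cstr| ...)        (method on StrSlice)
}
```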
The code to build the transmute intrinsic currently makes the invalid
assumption that if the in-type is non-immediate, the out-type is
non-immediate as well. But this is wrong, for example when transmuting
`[int, ..1]` to `int`. So we need to handle this fourth case as well.
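The missed case, in today's syntax (`[int, ..1]` is now written `[isize; 1]`):

```rust
use std::mem;

fn main() {
    // Non-immediate in-type (a one-element aggregate) transmuted to an
    // immediate out-type (a plain integer): the sizes match, but the
    // old code mishandled this combination.
    let a: [isize; 1] = [42];
    let x: isize = unsafe { mem::transmute(a) };
    assert_eq!(x, 42);
}
```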
Fixes #7988
This allows for control over the section placement of static, static
mut, and fn items. One caveat: if a static and a static mut are
placed in the same section, the static is declared first, and the
static mut is later assigned to, the generated program crashes. For example:
#[link_section=".boot"]
static foo : uint = 0xdeadbeef;
#[link_section=".boot"]
static mut bar : uint = 0xcafebabe;
Declaring bar first would mark .bootdata as writable, preventing the
crash when bar is written to.
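The workaround implied above, as a sketch in the syntax of the example (order flipped so the writable static comes first):

```rust
// Declaring the writable static first marks the shared section as
// writable from the start, avoiding the crash described above.
#[link_section=".boot"]
static mut bar: uint = 0xcafebabe;

#[link_section=".boot"]
static foo: uint = 0xdeadbeef;
```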
Improve vtable resolution in a handful of ways. First, if we don't
find a vtable for a self/param type, do a regular vtable search. This
could find impls of the form "impl for A". Second, we don't require
that types be fully resolved before looking up subtables, and we
process tables in reverse order. This allows us to gain more
information about early type parameters based on how they are used by
the impls used to resolve later params.
Closes #6967, I believe.
Continuation of https://github.com/mozilla/rust/pull/7826.
AST spanned<T> refactoring, AST type renamings:
`crate => Crate`
`local => Local`
`blk => Block`
`crate_num => CrateNum`
`crate_cfg => CrateConfig`
`field => Field`
Also, Crate, Field and Local are not wrapped in spanned<T> anymore.
These changes remove unnecessary basic blocks and the associated branches from
the LLVM IR that we emit. Together, they reduce the time for unoptimized builds
in stage2 by about 10% on my box.
These blocks were required because previously we could only insert
instructions at the end of blocks, but we wanted to have all allocas in
one place so they can be collapsed. But now we have "direct" access to
the LLVM IR builder and can position it freely. This allows us to use
the same trick that clang uses: we insert a dummy "marker" instruction
to identify the spot at which we want to insert allocas. We can then
later position the IR builder at that spot and insert the alloca
instruction, without any dedicated block.
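A toy model of the marker trick in plain Rust (not rustc's real builder API):

```rust
fn main() {
    // The entry block starts with a dummy marker instruction.
    let mut entry: Vec<String> = vec!["marker".to_string()];

    // Ordinary code generation appends at the end of the block.
    entry.push("store i32 1, %p".to_string());
    entry.push("ret void".to_string());

    // To emit an alloca, reposition "the builder" at the marker and
    // insert there; no dedicated alloca block is needed.
    let at = entry.iter().position(|i| i == "marker").unwrap();
    entry.insert(at, "%p = alloca i32".to_string());

    assert_eq!(entry[0], "%p = alloca i32");
}
```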
The block for loading the closure environment can now also go away,
because the function context now provides the toplevel block, and the
translation of the loading happens first, so that's good enough.
Makes the LLVM IR a bit more readable, saving a bunch of branches in the
unoptimized code, which benefits unoptimized builds.
Currently, the helper functions in the "build" module can only append
at the end of a block. For certain things we'll want to be able to
insert code at arbitrary locations inside a block, though. Although we
can do that by directly calling the LLVM functions, that is rather ugly
and means that some things need to be implemented twice: once in terms
of the helper functions and once in terms of low-level LLVM functions.
Instead of doing that, we should provide a Builder type that gives
low-level access to the builder and can be used both by the helper
functions in the "build" module and by larger units of abstraction
that combine several LLVM instructions.
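A hedged sketch of that layering (names hypothetical): the helper-style methods are implemented on top of the same positioned-insert primitive that low-level users get.

```rust
// Toy Builder: it owns an insertion position within a block, so the
// helpers and low-level users share one mechanism instead of two.
struct Builder {
    insts: Vec<String>,
    pos: usize, // current insertion index
}

impl Builder {
    fn new() -> Builder {
        Builder { insts: Vec::new(), pos: 0 }
    }
    // Low-level primitive: insert at the current position.
    fn insert(&mut self, inst: &str) {
        self.insts.insert(self.pos, inst.to_string());
        self.pos += 1;
    }
    // Helper-style wrapper, built on the same primitive.
    fn ret(&mut self) {
        self.pos = self.insts.len(); // helpers append at the end
        self.insert("ret void");
    }
    fn position_at(&mut self, idx: usize) {
        self.pos = idx;
    }
}

fn main() {
    let mut b = Builder::new();
    b.insert("add i32 1, 2");
    b.ret();
    b.position_at(0); // jump back to the top of the block
    b.insert("%p = alloca i32");
    assert_eq!(b.insts[0], "%p = alloca i32");
}
```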
Currently, all closures have an llenv block to load values from the
captured environment, but for closures that don't actually capture
anything, that block is useless and can be skipped.
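An example of the case this covers (modern closure syntax):

```rust
fn main() {
    // This closure captures nothing, so there is no environment to
    // load and its env-loading block can be skipped entirely.
    let f = || 1 + 1;
    assert_eq!(f(), 2);
}
```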
This does a number of things, but especially dramatically reduces the
number of allocations performed for operations involving attributes/
meta items:
- Converts ast::meta_item & ast::attribute and other associated enums
to CamelCase.
- Converts several standalone functions in syntax::attr into methods,
defined on two traits, AttrMetaMethods & AttributeMethods. The former
is common to both MetaItem and Attribute, since the latter is a thin
wrapper around the former (a sketch follows this list).
- Deletes functions that are unnecessary due to iterators.
- Converts other standalone functions to use iterators and the generic
AttrMetaMethods rather than allocating a lot of new vectors (e.g. the
old code had to allocate a new vector just to apply functions that
operated on `&[meta_item]` to an `&[attribute]`).
- Moves the core algorithm of the #[cfg] matching to syntax::attr,
similar to find_inline_attr and find_linkage_metas.
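A hedged sketch of the trait layering described above (method names illustrative, not rustc's exact API):

```rust
struct MetaItem { name: String }
struct Attribute { meta: MetaItem } // thin wrapper over MetaItem

// Shared accessors live in one trait, so code can be generic over
// either representation without allocating intermediate vectors.
trait AttrMetaMethods {
    fn name(&self) -> &str;
}

impl AttrMetaMethods for MetaItem {
    fn name(&self) -> &str { &self.name }
}

impl AttrMetaMethods for Attribute {
    fn name(&self) -> &str { self.meta.name() } // delegate to the wrapped item
}

fn main() {
    let m = MetaItem { name: "cfg".to_string() };
    let a = Attribute { meta: MetaItem { name: "inline".to_string() } };
    assert_eq!(m.name(), "cfg");
    assert_eq!(a.name(), "inline");
}
```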
This doesn't have much of an effect on the speed of #[cfg] stripping,
despite hugely reducing the number of allocations performed; presumably
most of the time is spent in the ast folder rather than doing attribute
checks.
Also fixes the Eq instance of MetaItem_ to correctly ignore spans, so
that `rustc --cfg 'foo(bar)'` now works.
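A sketch of the Eq fix (types simplified; the real MetaItem_ carries more structure):

```rust
#[derive(Clone, Copy)]
struct Span { lo: usize, hi: usize }

struct MetaItem {
    name: String,
    span: Span, // where it was parsed; irrelevant to equality
}

impl PartialEq for MetaItem {
    fn eq(&self, other: &MetaItem) -> bool {
        // Compare only the meaningful content and ignore spans, so a
        // meta item built from `--cfg 'foo(bar)'` can equal one parsed
        // from source.
        self.name == other.name
    }
}

fn main() {
    let from_flag = MetaItem { name: "foo".to_string(), span: Span { lo: 0, hi: 0 } };
    let from_src = MetaItem { name: "foo".to_string(), span: Span { lo: 10, hi: 13 } };
    // Spans differ...
    assert_ne!((from_flag.span.lo, from_flag.span.hi),
               (from_src.span.lo, from_src.span.hi));
    // ...but the items compare equal.
    assert!(from_flag == from_src);
}
```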