This exposes volatile versions of the memset/memmove/memcpy intrinsics.
The volatile parameter must be constant, so this can't simply be a
parameter to our intrinsics.
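As a rough sketch only (the intrinsic names and signatures below are assumed for illustration, not taken from this change), exposing separate volatile intrinsics looks something like:

    // Because LLVM requires the volatile flag to be a compile-time constant,
    // volatile behaviour is exposed as distinct intrinsics rather than as a
    // bool parameter on the existing memcpy/memmove/memset intrinsics.
    extern "rust-intrinsic" {
        fn volatile_set_memory<T>(dst: *mut T, val: u8, count: usize);
        fn volatile_copy_memory<T>(dst: *mut T, src: *const T, count: usize);
    }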
This currently requires linking against a library like libquadmath (or
libgcc), because compiler-rt barely has any support for this and most
hardware does not yet have 128-bit precision floating point. For this
reason, it's currently hidden behind a feature gate.
When compiler-rt is updated to trunk, some tests can be added for
constant evaluation since there will be support for the comparison
operators.
Closes #13381
Refactors all uses of ty_vec and associated items to remove the vstore abstraction (still used for strings, for now). Pointers to vectors are now stored as ty_rptr or ty_uniq wrapped around a ty_vec. There are no user-facing changes. Existing behaviour is preserved by special-casing many instances of pointers containing vectors. Hopefully most of these hacks will go away with DST. For now it is more useful to leave them in place than to abstract them into a method or similar.
Closes #13554.
When upgrading LLVM, only Rust functions had the "split-stack" attribute added.
This commit changes the addition of LLVM's "split-stack" attribute to *always*
occur, and then removes it when the "no_split_stack" Rust attribute is present.
Closes #13625
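As a minimal sketch (2014-era attribute, shown purely for illustration), opting a single function out of the split-stack prologue looked like:

    // This function is compiled without the split-stack (__morestack) prologue;
    // every other function still gets LLVM's "split-stack" attribute.
    #[no_split_stack]
    fn raw_thread_start() {
        // must not require stack growth
    }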
This is a bit of an interesting upgrade to LLVM. Upstream LLVM has started using C++11 features, so they require a C++11 compiler to build. I've updated all the bots to have a C++11 compiler, and they appear to be building LLVM successfully:
* Linux bots - I added gcc/g++ 4.7 (good enough)
* Android bots - same as the linux ones
* Mac bots - I installed the most recent command line tools for Lion which gives us clang 3.2, but LLVM wouldn't build unless it was explicitly asked to link to `libc++` instead of `libstdc++`. This involved tweaking `mklldeps.py` and the `configure` script to get things to work out
* Windows bots - mingw-w64 has gcc 4.8.1 which is sufficient for building LLVM (hurray!)
* BSD bots - I updated FreeBSD to 10.0 which brought with it a relevant version of clang.
The largest fallout I've seen so far is that the test suite doesn't work at all on FreeBSD 10. We've already stopped gating on FreeBSD due to #13427 (we used to be on FreeBSD 9), so I don't think this puts us in too bad of a situation. I will continue to attempt to fix FreeBSD and the breakage there.
The LLVM update brings with it all of the recently upstreamed LLVM patches. We only have one local patch now, which is just an optimization and isn't required to use upstream LLVM. I want to maintain compatibility with LLVM 3.3 and 3.4 while we can, and this upgrade keeps us up to date with the 3.5 release. Once 3.5 is released, we will in theory no longer require a bundled LLVM.
This comes with a number of fixes to be compatible with upstream LLVM:
* Previously all monomorphizations of "mem::size_of()" would receive the same
symbol. In the past LLVM would silently rename duplicated symbols, but it now
appears to drop the duplicate symbols and functions. The symbol names of
monomorphized functions are no longer based solely on the type of the
function, but rather on the type plus the unique hash of the monomorphization.
* Split stacks are no longer a global feature controlled by a flag in LLVM.
Instead, they are opt-in on a per-function basis through a function attribute.
The Rust #[no_split_stack] attribute disables this; otherwise all functions
have the split-stack attribute attached to them.
* The compare-and-swap instruction now takes two atomic orderings: one for the
success case and one for the failure case. LLVM internally has an
implementation of calculating the appropriate failure ordering given a
particular success ordering (previously only a success ordering was
specified), and I copied that into the intrinsic translation, so the failure
ordering isn't supplied at the source level for now (see the sketch after this
list).
* Minor tweaks to LLVM's API in terms of debuginfo, naming, c++11 conventions,
etc.
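For the compare-and-swap point above, here is a minimal sketch of deriving a failure ordering from a success ordering. The enum and the mapping mirror LLVM's helper as I understand it; this is illustrative, not the compiler's actual code:

    enum AtomicOrdering { Relaxed, Acquire, Release, AcqRel, SeqCst }

    // Release and AcqRel are not valid failure orderings, so they are
    // weakened; every other ordering can be reused as-is.
    fn failure_ordering(success: AtomicOrdering) -> AtomicOrdering {
        match success {
            AtomicOrdering::Release => AtomicOrdering::Relaxed,
            AtomicOrdering::AcqRel => AtomicOrdering::Acquire,
            other => other,
        }
    }

    fn main() {
        // A Release success ordering pairs with a Relaxed failure ordering.
        let f = failure_ordering(AtomicOrdering::Release);
        assert!(matches!(f, AtomicOrdering::Relaxed));
    }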
Part of this required adding an override of `fold_type_method` in the
Folder for Ctx impl; it follows the same pattern as `fold_method`.
Also, as a drive-by fix, I moved all of the calls to `folder.new_id`
in syntax::fold's no-op default traversal to really be the first
statement in each function.
* This is to uphold the invariant that `folder.new_id` is always
called first (an unfortunate requirement of the current `ast_map`
code), an invariant that we seemingly were breaking in e.g. the
previous `noop_fold_block`.
* Now it should be easier to see when adding new code that this
invariant must be upheld.
* (note that the breakage in `noop_fold_block` may not have mattered
so much previously, since the only thing that blocks can bind are
lifetimes, which I am only adding support for now.)
Previously, when statements of the form "Foo;" or "let _ = Foo;" were
encountered and Foo had a destructor, the destructor was not run. This changes
the relevant locations in trans to check for ty::type_needs_drop and invoke
trans_to_lvalue instead of trans_into.
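A minimal example of the kind of code affected (illustrative):

    struct Foo;

    impl Drop for Foo {
        fn drop(&mut self) {
            println!("dropping Foo");
        }
    }

    fn main() {
        Foo;         // previously the destructor was silently skipped here
        let _ = Foo; // ...and here; both statements now run `drop`
    }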
Closes #4734. Closes #6892.
This patch fixes issue #13186.
When generating a constant expression for an enum, it is possible that the
alignment of the expression is not equal to the alignment of the type. In that
case, the space after the last struct field must be padded so that the size of
the value matches the size of the struct. This commit adds that padding.
See the detailed explanation in src/test/run-pass/trans-tag-static-padding.rs
Cleans up some remnants of the old mutability system and only allows vector/trait mutability in `VstoreSlice` (`&mut [T]`) and `RegionTraitStore` (`&mut Trait`).
libstd: Implement `StrBuf`, a new string buffer type like `Vec`, and port all code over to use it.
Rebased & tests-fixed version of https://github.com/mozilla/rust/pull/13269
This commit makes sure that code inlined from other functions isn't assigned the source position of the call site, since this leads to undesired behavior when setting line breakpoints (issue #12886)
`RefCell::get` can be a bit surprising, because it actually clones the wrapped value. This removes `RefCell::get` and replaces all users with `RefCell::borrow()` when it can, and `RefCell::borrow().clone()` when it can't. It removes `RefCell::set` for consistency. This closes #13182.
It also fixes an infinite loop in a test when debugging is on.
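A small illustration of the replacement pattern (the values here are arbitrary):

    use std::cell::RefCell;

    fn main() {
        let cell = RefCell::new(vec![1, 2, 3]);

        // Where the old `cell.get()` silently cloned, the clone is now explicit:
        let copy = cell.borrow().clone();

        // Where only a read is needed, borrow without cloning:
        let len = cell.borrow().len();

        println!("{:?} {}", copy, len);
    }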
This was missed when dropping the null-termination from our string
types. An explicit null byte can still be placed anywhere in a string if
desired, but there's no reason to stick one at the end of every string
constant.
It's surprising that `RefCell::get()` is implicitly doing a clone
on a value. This patch removes it and replaces all users with
either `.borrow()` when we can autoderef, or `.borrow().clone()`
when we cannot.
This change removes the AbiSet from the AST, converting all usage to have just
one Abi value. The current scheme selects a relevant ABI given a list of ABIs
based on the target architecture and how relevant each ABI is to that
architecture.
Instead of this mildly complicated scheme, only one ABI will be allowed in abi
strings, and pseudo-abis will be created for special cases as necessary. For
example the "system" abi exists for stdcall on win32 and C on win64.
Closes #10049
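As a sketch of the resulting usage (the Win32 function here is picked only to make the example concrete):

    // "system" resolves to stdcall on win32 and to the C convention on win64.
    #[cfg(windows)]
    extern "system" {
        fn GetCurrentProcessId() -> u32;
    }

    #[cfg(windows)]
    fn main() {
        println!("{}", unsafe { GetCurrentProcessId() });
    }

    #[cfg(not(windows))]
    fn main() {}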
This commit deals with the fallout of the previous change by making tuple
structs have public fields where necessary (now that the fields are private by
default).
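For example (illustrative), a tuple struct whose field should stay reachable from other modules now needs an explicit `pub`:

    mod geometry {
        // Without `pub` on the field, `Radius(5)` and `.0` would no longer be
        // usable outside this module.
        pub struct Radius(pub u32);
    }

    fn main() {
        let r = geometry::Radius(5);
        println!("{}", r.0);
    }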
Only supports crate level statics. No debug info is generated for function level statics. Closes #9227.
As discussed at the end of the comments for #9227, I took an initial stab at adding support for function level statics and decided it would be enough work to warrant being split into a separate issue.
See #13144 for the new issue describing the need to add support for function level static variables.
The `_match.rs` code takes advantage of passes prior to `trans` and
aggressively prunes the sub-match tree based on exact equality. When it comes
to literals or ranges, this strategy may produce wrong results if there is a
guard function or multiple patterns inside a tuple.
Closes #12582.
Closes #13027.
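An illustrative shape of the affected matches (not the exact regression tests from the issues): literal patterns inside a tuple combined with a guard:

    fn classify(pair: (i32, i32), flag: bool) -> &'static str {
        match pair {
            (1, _) if flag => "guarded",
            (1, 2) => "literal pair", // the kind of arm that could be pruned incorrectly
            _ => "other",
        }
    }

    fn main() {
        assert_eq!(classify((1, 2), false), "literal pair");
    }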
This needs to be removed as part of removing `~[T]`. Partial type hints
are now allowed, and will remove the need to add a version of this
method for `Vec<T>`. For now, this involves a few workarounds for
partial type hints not completely working.
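For reference, a partial type hint lets the container type be named while the element type is left to inference, e.g. (modern syntax, purely illustrative):

    fn main() {
        // `_` in the hint leaves the element type to inference.
        let doubled = [1, 2, 3].iter().map(|x| x * 2).collect::<Vec<_>>();
        println!("{:?}", doubled);
    }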
This is the first step to replacing OptVec with a new representation:
remove all mutability. Any mutations have to go via `Vec` and then be
converted to `OptVec`.
Many of the uses of OptVec are unnecessary now that Vec has no-alloc
emptiness (and have been converted to Vec): the only places that really
need it are the AST and the sty types (and so on), where there are a *lot* of
instances and they're (mostly) immutable.
These variants occur rarely but inflate the whole enum for the other variants, leaving a lot of wasted space. In total this reduces `ty::sty` from 160 bytes to 96 (on a 64-bit platform).
After this, `ty_struct` and `ty_enum` are the largest variants, with the 80-byte `substs` being the major contributor.
This reduces the size of sty from 112 to 96; like with the ty_trait
variant, this variant of sty occurs rarely (~1%) so the benefits are
large and the costs small.
This reduces ty::sty from 160 bytes to just 112, and some measurements
eddyb made suggest that the ty_trait variant occurs very
rarely (e.g. ~1% of all sty instances) hence this will result in a large
memory saving, and the cost of the indirection is unlikely to be an
issue.
Remove the linker_private and linker_private_weak linkage attributes,
they have been superseded by private and private_weak and have been
removed in upstream LLVM in commit r203866.
This PR enables the use of mutable slices in *mutable* static items. The work was started by @xales and I added a follow-up commit that moves the *immutable* restriction to the recently added `check_static`.
Closes #11411
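A sketch of the kind of item this permits, in the syntax of the time (illustrative only):

    // 2014-era sketch; per the description, the *immutable* form (a non-`mut`
    // static holding a `&mut [..]`) remains rejected by `check_static`.
    static mut VALUES: &'static mut [int] = &mut [1, 2, 3];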
This commit removes all internal support for the previously used __log_level()
expression. The logging subsystem was previously modified to not rely on this
magical expression. This also removes the only other function to use the
module_data map in trans, decl_gc_metadata. It appears that this is an ancient
function from a GC only used long ago.
This does not remove the crate map entirely, as libgreen still uses it to hook
in to the event loop provided by libgreen.
This commit moves all logging out of the standard library into an external
crate. This crate is the new crate which is responsible for all logging macros
and logging implementation. A few reasons for this change are:
* The crate map has always been a bit of a code smell among rust programs. It
has difficulty being loaded on almost all platforms, and it's used almost
exclusively for logging and only logging. Removing the crate map is one of the
end goals of this movement.
* The compiler has a fair bit of special support for logging. It has the
__log_level() expression as well as generating a global word per module
specifying the log level. This is unfairly favoring the built-in logging
system, and is much better done purely in libraries instead of the compiler
itself.
* Initialization of logging is much easier to do if there is no reliance on a
magical crate map being available to set module log levels.
* If the logging library can be written outside of the standard library, there's
no reason that it shouldn't be. It's likely that we're not going to build the
highest quality logging library of all time, so third-party libraries should
be able to provide just as high-quality logging systems as the default one
provided in the rust distribution.
With a migration such as this, the change does not come for free. There are some
subtle changes in the behavior of liblog vs the previous logging macros:
* The core change of this migration is that there is no longer a physical
log-level per module. This concept is still emulated (it is quite useful), but
there is now only a global log level, not a local one. This global log level
is a reflection of the maximum of all log levels specified. The previously
generated logging code looked like:
    if specified_level <= __module_log_level() {
        println!(...)
    }
The newly generated code looks like:
    if specified_level <= ::log::LOG_LEVEL {
        if ::log::module_enabled(module_path!()) {
            println!(...)
        }
    }
Notably, the first layer of checking is still intended to be "super fast" in
that it's just a load of a global word and a compare. The second layer of
checking is executed to determine if the current module does indeed have
logging turned on.
This means that if any module has a debug log level turned on, all modules
with debug log levels get a little bit slower (they all do more expensive
dynamic checks to determine if they're turned on or not).
Semantically, this migration brings no change in this respect, but
runtime-wise, this will have a perf impact on some code.
* A `RUST_LOG=::help` directive will no longer print out a list of all modules
that can be logged. This is because the crate map will no longer specify the
log levels of all modules, so the list of modules is not known. Additionally,
warnings can no longer be provided if a malformed logging directive was
supplied.
The new "hello world" for logging looks like:
    #[phase(syntax, link)]
    extern crate log;

    fn main() {
        debug!("Hello, world!");
    }
This commit goes back to using `gensym` to generate unique tokens to put into
the names of closures, allowing closures to be able to get demangled in
backtraces.
Closes #12400
There is a broader revision (that does this across the board) pending
in #12675, but that is awaiting the arrival of more data (to decide
whether to keep OptVec alive by using a non-Vec internally).
For this code, the representation of lifetime lists needs to be the
same in both ScopeChain and in the ast and ty structures. So it
seemed cleanest to just use `vec_ng::Vec`, now that it has a cheaper
empty representation than the current `vec` code.
It is often convenient to have forms of weak linkage or various other types of
linkage. Sadly, just using these flavors of linkage is not compatible with
Rust's typesystem and how it considers some pointers to be non-null.
As a compromise, this commit adds support for weak linkage to external symbols,
but it requires that this is only placed on extern statics of type `*T`.
Codegen-wise, we get translations like:
    // rust code
    extern {
        #[linkage = "extern_weak"]
        static foo: *i32;
    }

    // generated IR
    @foo = extern_weak global i32
    @_some_internal_symbol = internal global *i32 @foo
All references to the rust value of `foo` then reference `_some_internal_symbol`
instead of the symbol `_foo` itself. This allows us to guarantee that the
address of `foo` will never be null while the value may sometimes be null.
An example was implemented in `std::rt::thread` to determine if
`__pthread_get_minstack()` is available at runtime, and a test is checked in to
use it for a static value as well. Function pointers are a little odd because you
still need to transmute the pointer value to a function pointer, but it's
thankfully better than not having this capability at all.
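A minimal sketch of the runtime check this enables (2014-era syntax; the wrapper function and its name are my own illustration, not taken from the commit):

    extern {
        #[linkage = "extern_weak"]
        static __pthread_get_minstack: *const u8;
    }

    fn has_minstack() -> bool {
        // The value is null when the weak symbol is absent at runtime.
        unsafe { !__pthread_get_minstack.is_null() }
    }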
This leverages the new hashing framework and hashmap implementation to provide a
much speedier hashing algorithm for node ids and def ids. The hash algorithm
used is currently FNV hashing, but it's quite easy to swap out.
I originally implemented hashing as the identity function, but this actually
ended up slowing down rustc compiling libstd from 8s to 13s. I would suspect
that this is a result of a large number of collisions.
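For context, FNV-1a over a byte slice is only a few lines. The sketch below uses the standard 64-bit constants; the exact FNV variant and width the compiler adopted are not specified here:

    fn fnv1a(bytes: &[u8]) -> u64 {
        let mut hash: u64 = 0xcbf2_9ce4_8422_2325; // 64-bit offset basis
        for &b in bytes {
            hash ^= b as u64;
            hash = hash.wrapping_mul(0x0000_0100_0000_01b3); // 64-bit FNV prime
        }
        hash
    }

    fn main() {
        println!("{:x}", fnv1a(b"node id 42"));
    }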
With FNV hashing, we get these timings (compiling with --no-trans, in seconds):
| | before | after |
|-----------|---------:|--------:|
| libstd | 8.324 | 6.703 |
| stdtest | 47.674 | 46.857 |
| libsyntax | 9.918 | 8.400 |
Cosmetic changes at best, but there are so many such typos that I couldn't ignore them. :) Some of the typos show up in the generated documentation, but no changes should break the build.
The llvm.copysign and llvm.round intrinsics weren't added until LLVM 3.4, so if
we're on LLVM 3.3 we lower these to calls in libm instead of LLVM intrinsics.
This should fix our travis failures.
This PR brings back limited debuginfo which allows for nice backtraces and breakpoints, but omits any info about variables and types.
The `-g` and `--debuginfo` command line options have been extended to take an optional argument:
* `-g0` means no debug info.
* `-g1` means line-tables only.
* `-g2` means full debug info.
Specifying `-g` without argument is equivalent to `-g2`.
Fixes #12280
Closes #8506.
The `trans::adt` code for statics uses fields with `C_undef` values to
insert alignment padding (because LLVM's own alignment padding isn't
always sufficient for aggregate constants), and assumes that all fields
in the actual Rust value are represented by non-undef LLVM values, to
distinguish them from that padding.
But for nullable pointer enums, if the non-null variant has fields other
than the pointer used as the discriminant, they would be set to undef in
the null case, to reflect that they're never accessed.
To avoid the obvious conflict between these two items, the latter undefs
were wrapped in unary LLVM structs to distinguish them from the former
undefs. Except this doesn't actually work -- LLVM, not unreasonably,
treats the "wrapped undef" as a regular undef.
So this commit just sets all fields to null in the null pointer case of
a nullable pointer enum static, because the other fields don't really
need to be undef in the first place.
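An illustrative shape of the statics affected (whether a given enum actually receives the nullable-pointer layout is a compiler detail, so this is only a sketch):

    // Illustrative only: an enum whose non-null variant carries a field
    // besides the pointer.
    enum Entry {
        Missing,
        Present(&'static u32, u8),
    }

    // In the `Missing` case the non-pointer field is now emitted as zero
    // rather than undef in the static's initializer.
    static EMPTY: Entry = Entry::Missing;

    fn main() {
        if let Entry::Present(ptr, tag) = &EMPTY {
            println!("{} {}", ptr, tag);
        }
    }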
While we are not yet ready for compiler i18n, this also keeps the error handling code clean. The set of altered error messages was obtained by grepping for `"s"` and `(s)`, so there might be some missing messages.
Previously `ast::Arm` was always storing a single `ast::Expr` wrapped in an
`ast::Block` (for historical reasons, AIUI), so we might as well just store
that expr directly.
Closes #3085.
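A rough sketch of the shape of the change; the types and field set here are stand-ins, not the real AST definitions:

    struct Pat;
    struct Expr;
    struct Block { expr: Expr }

    // Before: the arm body was an `Expr` wrapped in a one-expression `Block`.
    struct ArmBefore { pats: Vec<Pat>, body: Block }

    // After: the arm stores the expression directly.
    struct ArmAfter { pats: Vec<Pat>, body: Expr }

    fn main() {
        let before = ArmBefore { pats: vec![Pat], body: Block { expr: Expr } };
        let _after = ArmAfter { pats: before.pats, body: before.body.expr };
    }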