Where ItemDecorator creates new items given a single item, ItemModifier
alters the tagged item in place. The expansion rules for this are a bit
weird, but I think they are the most reasonable option available.
When an item is expanded, all ItemModifier attributes are stripped from
it and the item is folded through all ItemModifiers. At that point, the
process repeats until there are no ItemModifiers in the new item.
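A minimal sketch of that loop, assuming stand-in types rather than the real libsyntax ones (`Item`, `Modifier`, and `expand_modifiers` below are all illustrative names, written in current Rust syntax):
```
struct Item {
    modifier_attrs: Vec<String>, // names of ItemModifier attributes still attached
}

type Modifier = fn(Item) -> Item;

fn expand_modifiers<F: Fn(&str) -> Modifier>(mut item: Item, lookup: F) -> Item {
    // Repeat until the item no longer carries any ItemModifier attributes.
    while !item.modifier_attrs.is_empty() {
        // Strip every ItemModifier attribute from the item...
        let attrs = std::mem::take(&mut item.modifier_attrs);
        // ...then fold the item through each corresponding modifier. A modifier
        // may attach new ItemModifier attributes, which is why we loop.
        item = attrs.iter().fold(item, |it, name| lookup(name)(it));
    }
    item
}
```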
Formatting via reflection has been a little questionable for some time now, and
it's a little unfortunate that one of the standard macros will silently use
reflection when you weren't expecting it. This adds small bits of code bloat to
libraries, as well as not always being necessary. In light of this information,
this commit switches assert_eq!() to using {} in the error message instead of
{:?}.
In updating existing code, there were a few error cases that I encountered:
* It's impossible to define Show for [T, ..N]. I think DST will alleviate this
because we can define Show for [T].
* A few types here and there just needed a #[deriving(Show)]
* Type parameters needed a Show bound; I often changed these call sites to `assert!(a == b)` instead
* `Path` doesn't implement `Show`, so assert_eq!() cannot be used on two paths.
I don't think this is much of a regression though because {:?} on paths looks
awful (it's a byte array).
Concretely speaking, this shaved 10K off a 656K binary. Not a lot, but sometimes
significant for smaller binaries.
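A hedged before/after sketch of the kind of change this required (the `Point` type is purely illustrative, and the syntax is that of the period):
```
#[deriving(Eq, Show)]
struct Point { x: int, y: int }

fn main() {
    let a = Point { x: 1, y: 2 };
    // assert_eq!'s failure message now formats with {} (fmt::Show) instead of
    // {:?}, so Point needs deriving(Show) (or a manual impl) and an Eq bound.
    assert_eq!(a, Point { x: 1, y: 2 });
}
```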
This new SVH is used to uniquely identify all crates as a snapshot in time of
their ABI/API/publicly reachable state. This current calculation is just a hash
of the entire crate's AST. This is obviously incorrect, but it is the reality
for today.
This change threads through the new Svh structure which originates from crate
dependencies. The concept of crate id hash is preserved to provide efficient
matching on filenames for crate loading. The inspected hash once crate metadata
is opened has been changed to use the new Svh.
The goal of this hash is to identify when upstream crates have changed but
downstream crates have not been recompiled. This will prevent the def-id drift
problem where upstream crates were recompiled, thereby changing their metadata,
but downstream crates were not recompiled.
In the future this hash can be expanded to exclude contents of the AST like doc
comments, but limitations in the compiler prevent this change from being made at
this time.
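A loose sketch of the idea in current Rust (not the actual rustc code; `AstLike` is just a stand-in for the crate's AST):
```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

pub struct Svh(u64);

pub fn calculate_svh<AstLike: Hash>(krate: &AstLike) -> Svh {
    let mut state = DefaultHasher::new();
    // Today: hash the entire crate AST. Eventually, pieces that don't affect
    // the ABI/API/publicly reachable surface (e.g. doc comments) should be
    // excluded from this hash.
    krate.hash(&mut state);
    Svh(state.finish())
}
```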
Closes #10207
This updates a number of ignore-test tests, and removes a few completely
outdated tests due to the feature being tested no longer being supported.
This brings a number of bench/shootout tests up to date so they're compiling
again. I make no claims to the performance of these benchmarks, it's just nice
to not have bitrotted code.
Closes #2604
Closes #9407
This commit removes deriving(ToStr) in favor of deriving(Show), migrating all impls of ToStr to fmt::Show.
Most of the details can be found in the first commit message.
Closes #12477
This commit changes the ToStr trait to:
impl<T: fmt::Show> ToStr for T {
    fn to_str(&self) -> ~str { format!("{}", *self) }
}
The ToStr trait has been on the chopping block for quite a while now, and this is
the final nail in its coffin. The trait and the corresponding method are not
being removed as part of this commit, but rather any implementations of the
`ToStr` trait are being forbidden because of the generic impl. The new way to
get the `to_str()` method to work is to implement `fmt::Show`.
Formatting into a `&mut Writer` (as `format!` does) is much more efficient than
`ToStr` when building up large strings. The `ToStr` trait forces many
intermediate allocations to be made while the `fmt::Show` trait allows
incremental buildup in the same heap allocated buffer. Additionally, the
`fmt::Show` trait is much more extensible in terms of interoperation with other
`Writer` instances and in more situations. By design the `ToStr` trait requires
at least one allocation whereas the `fmt::Show` trait does not require any
allocations.
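For reference, the same pattern in today's Rust, where `fmt::Show` has since become `fmt::Display` and the blanket `ToString` impl plays the role of the generic `ToStr` impl above (`Celsius` is an invented example type):
```
use std::fmt;

struct Celsius(f64);

impl fmt::Display for Celsius {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Writes straight into the caller's buffer: no intermediate allocation.
        write!(f, "{}C", self.0)
    }
}

fn main() {
    let t = Celsius(21.5);
    assert_eq!(t.to_string(), "21.5C"); // to_string() comes from the blanket impl
    println!("{}", t);                  // incremental write, no String built up
}
```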
Closes #8242
Closes #9806
With the stability attributes we can put public-but-unstable modules next to others, so this moves `intrinsics` and `raw` out of the `unstable` module (and marks both as `#[experimental]`).
These two containers are indeed collections, so their place is in
libcollections, not in libstd. There will always be a hash map as part of the
standard distribution of Rust, but by moving it out of the standard library it
makes libstd that much more portable to more platforms and environments.
This conveniently also removes the stuttering of 'std::hashmap::HashMap',
although 'collections::HashMap' is only one character shorter.
This commit rewrites crate loading internally in attempt to look at less
metadata and provide nicer errors. The loading is now split up into a few
stages:
1. Collect a mapping of (hash => ~[Path]) for a set of candidate libraries for a
given search. The hash is the hash in the filename and the Path is the
location of the library in question. All candidates are filtered based on
their prefix/suffix (dylib/rlib appropriate) and then the hash/version are
split up and are compared (if necessary).
This means that if you're looking for an exact hash of a library, you don't
have to open up the metadata of every library with the same name that also
happens to be in your path.
2. Once this mapping is constructed, each (hash, ~[Path]) pair is filtered down
to just a Path. This is necessary because the same rlib could show up twice
in the path in multiple locations. Right now the filenames are based on just
the crate id, so this could be indicative of multiple versions of a crate
during one crate_id lifetime in the path. If multiple duplicate crates are
found, an error is generated.
3. Now that we have a mapping of (hash => Path), we error on multiple versions
saying that multiple versions were found. Only if there's one (hash => Path)
pair do we actually return that Path and its metadata.
With this restructuring, errors which were previously assertions are now
first-class errors. Additionally, this should read much less
metadata with lots of crates of the same name or same version in a path.
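A rough, self-contained sketch of the last two stages (the types and names here are illustrative, not rustc's actual loader API):
```
use std::collections::HashMap;
use std::path::PathBuf;

// Input: the stage-1 mapping of hash-in-filename => candidate paths.
fn select(candidates: HashMap<String, Vec<PathBuf>>) -> Result<PathBuf, String> {
    let mut unique: HashMap<String, PathBuf> = HashMap::new();
    // Stage 2: collapse each hash's candidates down to one path. The same rlib
    // may legitimately appear under several directories of the search path, but
    // two genuinely different files for one hash is an error.
    for (hash, paths) in candidates {
        let mut names: Vec<_> = paths.iter().filter_map(|p| p.file_name()).collect();
        names.sort();
        names.dedup();
        if names.len() > 1 {
            return Err(format!("multiple duplicate crates found for hash {}", hash));
        }
        let first = paths.into_iter().next().ok_or("empty candidate list")?;
        unique.insert(hash, first);
    }
    // Stage 3: more than one surviving hash means multiple versions in the path.
    match unique.len() {
        1 => Ok(unique.into_iter().next().unwrap().1),
        0 => Err("can't find crate".to_string()),
        _ => Err("found multiple versions of the crate".to_string()),
    }
}
```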
Closes #11908
The new methodology can be found in the re-worded comment, but the gist of it is
that -C prefer-dynamic doesn't turn off static linkage. The error messages
should also be a little more sane now.
Closes #12133
Externally loaded libraries are able to do things that cause references
to them to survive past the expansion phase (e.g. creating @-box cycles,
launching a task or storing something in task local data). As such, the
library has to stay loaded for the lifetime of the process.
This has been a long time coming. Conditions in rust were initially envisioned
as being a good alternative to error code return pattern. The idea is that all
errors are fatal-by-default, and you can opt-in to handling the error by
registering an error handler.
While sounding nice, conditions ended up having some unforeseen shortcomings:
* Actually handling an error has some very awkward syntax:
    let mut result = None;
    let mut answer = None;
    io::io_error::cond.trap(|e| { result = Some(e) }).inside(|| {
        answer = Some(some_io_operation());
    });
    match result {
        Some(err) => { /* hit an I/O error */ }
        None => {
            let answer = answer.unwrap();
            /* deal with the result of I/O */
        }
    }
This pattern can certainly be helped by functions like io::result, but at its
core actually handling conditions is fairly difficult.
* The "zero value" of a function is often confusing. One of the main ideas
behind using conditions was to change the signature of I/O functions. Instead
of read_be_u32() returning a result, it returned a u32. Errors were signaled
via a condition, and if you caught the condition you understood that the "zero
value" returned is actually a garbage value. These zero values are often
difficult to understand, however.
One case of this is the read_bytes() function. The function takes the number of
bytes to read and returns an array of that size. The
array may actually be shorter, however, if an error occurred.
Another case is fs::stat(). The theoretical "zero value" is a blank stat
struct, but it's a little awkward to create and return a zero'd out stat
struct on a call to stat().
In general, the return values of functions that can raise errors are much more
natural when using a Result as opposed to an always-usable zero-value.
* Conditions impose a necessary runtime requirement on *all* I/O. In theory I/O
is as simple as calling read() and write(), but using conditions imposed the
restriction that a rust local task was required if you wanted to catch errors
with I/O. While certainly a surmountable difficulty, this was always a bit of
a thorn in the side of conditions.
* It is not always clear that a function raises conditions. This suffers from a
similar problem to exceptions, where you don't
actually know whether a function raises a condition or not. The documentation
likely explains, but if someone retroactively adds a condition to a function
there's nothing forcing upstream users to acknowledge a new point of task
failure.
* Libraries using I/O are not guaranteed to correctly raise conditions when an
error occurs. In developing various I/O libraries, it's much easier to just
return `None` from a read rather than raising an error. The silent contract of
"don't raise on EOF" was a little difficult to understand and threw a wrench
into the answer of the question "when do I raise a condition?"
Many of these difficulties can be overcome through documentation, examples, and
general practice. In the end, all of these difficulties taken together were too
overwhelming, and improving various aspects didn't end up helping that much.
A result-based I/O error handling strategy also has shortcomings, but the
cognitive burden is much smaller. The tooling necessary to make this strategy as
usable as conditions were is much smaller than the tooling necessary for
conditions.
Perhaps conditions may manifest themselves as a future entity, but for now
we're going to remove them from the standard library.
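For contrast, a hedged sketch of the Result-based style this moves toward, written in today's Rust (the file name and function here are invented for illustration):
```
use std::fs::File;
use std::io::{self, Read};

fn read_config(path: &str) -> io::Result<String> {
    let mut contents = String::new();
    // Errors propagate to the caller: no handler registration, no zero values.
    File::open(path)?.read_to_string(&mut contents)?;
    Ok(contents)
}

fn main() {
    match read_config("config.toml") {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(err) => println!("hit an I/O error: {}", err),
    }
}
```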
Closes #9795
Closes #8968
This was the original intention of the privacy of structs, and it was
erroneously implemented before. A pub struct will now have default-pub fields,
and a non-pub struct will have default-priv fields. This essentially brings
struct fields in line with enum variants in terms of inheriting visibility.
As usual, extraneous visibility modifiers are disallowed, depending on the case
that you're dealing with.
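A tiny illustration of the new defaults (period syntax, not an excerpt of the test suite):
```
pub struct Exported { x: int }  // `x` is public: it inherits the struct's `pub`
struct Local { y: int }         // `y` is private: the struct itself is private
```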
Closes #11522
Now that procedural macros can be implemented outside of the compiler,
it's more important to have a reasonable API to work with. Here are the
basic changes:
* Rename SyntaxExpanderTTTrait to MacroExpander, SyntaxExpanderTT to
BasicMacroExpander, etc. I think "procedural macro" is the right
term for these now, right? The other option would be SynExtExpander
or something like that.
* Stop passing the SyntaxContext to extensions. This was only ever used
by macro_rules, which doesn't even use it anymore. I can't think of
a context in which an external extension would need it, and removal
allows the API to be significantly simpler - no more
SyntaxExpanderTTItemExpanderWithoutContext wrappers to worry about.
The new macro loading infrastructure needs the ability to force a
procedural-macro crate to be built with the host architecture rather than the
target architecture (because the compiler is just about to dlopen it).
The `print!` and `println!` macros are now the preferred method of printing, and so there is no reason to export the `stdio` functions in the prelude. The functions have also been replaced by their macro counterparts in the tutorial and other documentation so that newcomers don't get confused about what they should be using.
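For example, new code should simply use the macros:
```
fn main() {
    print!("Hello, ");
    println!("world!");
}
```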
The comments have more information as to why this is done, but the basic idea is
that finding an exported trait is actually a fairly difficult problem. The true
answer lies in whether a trait is ever referenced from another exported method,
and right now this kind of analysis doesn't exist, so the conservative answer of
"yes" is always returned to answer whether a trait is exported.
Closes #11224
Closes #11225
This replaces the link meta attributes with a pkgid attribute and uses a hash
of this as the crate hash. This makes the crate hash computable by things
other than the Rust compiler. It also switches the hash function to SHA1, since
that is much more likely to be available in shell, Python, etc. than SipHash.
Fixes #10188, #8523.
In this series of commits, I've implemented static linking for rust. The scheme I implemented was the same as my [mailing list post](https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html).
The commits have more details on the nitty gritty of what went on. I've rebased this on top of my native mutex pull request (#10479), but I imagine that it will land before this lands; I just wanted to pre-emptively get all the rebase conflicts out of the way (because this is reorganizing building librustrt as well).
Some contentious points I want to make sure are all good:
* I've added more "compiler chooses a default" behavior than I would like; I want to make sure that this is all very clearly outlined in the code, and if not I would like to remove behavior or make it clearer.
* I want to make sure that the new "fancy suite" tests are ok (using make/python instead of another rust crate)
If we do indeed pursue this, I would be more than willing to write up a document describing how linking in rust works. I believe that this behavior should be very understandable, and the compiler should never hinder someone just because linking is a little fuzzy.
This commit alters the build process of the compiler to build a static
librustrt.a instead of a dynamic version. This means that we can stop
distributing librustrt as well as default linking against it in the compiler.
This also means that if you attempt to build rust code without libstd, it will
no longer work if there are any landing pads in play. The reason for this is
that LLVM and rustc will emit calls to the various upcalls in librustrt used to
manage exception handling. In theory we could split librustrt into librustrt and
librustupcall. We would then distribute librustupcall and link to it for all
programs using landing pads, but I would rather see just one librustrt artifact
and simplify the build process.
The major benefit of doing this is that building a static rust library for use
in embedded situations all of a sudden just became a whole lot more feasible.
Closes #3361
I added a test case which does not compile today, and required changes on
privacy's side of things to get right. Additionally, this moves a good bit of
logic which did not belong in reachability into privacy.
All of reachability should solely be responsible for determining what the
reachable surface area of a crate is given the exported surface area (where the
exported surface area is that which is usable by external crates).
Privacy will now correctly figure out what's exported by deeply looking
through reexports. Previously if a module were reexported under another name,
nothing in the module would actually get exported in the executable. I also
consolidated the phases of privacy to be clearer about what's an input to what.
The privacy checking pass no longer uses the notion of an "all public" path, and
the embargo visitor is no longer an input to the checking pass.
Currently the embargo visitor is built as a saturating analysis because it's
unknown what portions of the AST are going to get re-exported.
This also cracks down on exported methods from impl blocks and trait blocks. If you implement a private trait, none of the symbols are exported, and if you have an impl for a private type none of the symbols are exported either. On the other hand, if you implement a public trait for a private type, the symbols are still exported. I'm unclear on whether this last part is correct, but librustc will fail to link unless it's in place.
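A hedged illustration of those impl rules (period-style syntax, illustrative names):
```
trait PrivateTrait { fn a(&self); }
pub trait PublicTrait { fn b(&self); }
struct PrivateType;

impl PrivateTrait for PrivateType { fn a(&self) {} } // nothing exported
impl PublicTrait for PrivateType { fn b(&self) {} }  // symbols still exported
```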
These two attributes are no longer useful now that Rust has decided to leave
segmented stacks behind. It is assumed that the rust task's stack is always
large enough to make an FFI call (due to the stack being very large).
There's always the case of stack overflow, however, to consider. This does not
change the behavior of stack overflow in Rust. This is still normally triggered
by the __morestack function and aborts the whole process.
C stack overflow will continue to corrupt the stack, however (as it did before
this commit as well). The future improvement of a guard page at the end of every
rust stack is still unimplemented and is intended to be the mechanism through
which we attempt to detect C stack overflow.
Closes #8822
Closes #10155
This isn't quite as fancy as the struct in #9913, but I'm not sure we should be exposing crate names/hashes of the types. That being said, it'd be pretty easy to extend this (the deterministic hashing regardless of what crate you're in was the hard part).
Previously, all functions called by a reachable function were considered
reachable, but this is only the case if the original function was possibly
inlineable (if it's type generic or #[inline]-flagged).
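A small sketch of the distinction (illustrative names):
```
fn helper() {}

#[inline]
pub fn marked() { helper() }         // inlineable: `helper` must stay reachable
pub fn generic<T>(_: T) { helper() } // type-generic: also inlineable, same story
pub fn ordinary() { helper() }       // not inlineable: downstream crates never
                                     // see its body, so `helper` isn't dragged
                                     // into the reachable set on its account
```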
When re-exporting a trait/structure/enum, then we need to propagate the
reachability of the type through the methods that are defined on it.
Closes #9906
Closes #9968
This commit resumes management of the stack boundaries and limits when switching
between tasks. This additionally leverages the __morestack function to run code
on "stack overflow". The current behavior is to abort the process, but this is
probably not the best behavior in the long term (for details, see the comment I
wrote up in the stack exhaustion routine).
This fixes a bug in which the visibility rules were approximated by
reachability, which forgot to cover the case where a 'pub use' reexports a
private item. The fix is to instead use the results of the privacy pass of
the compiler to create the initial working set of the reachability pass.
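A hedged illustration of the case being fixed (illustrative names):
```
mod detail {
    pub fn helper() {}   // not otherwise publicly reachable
}
pub use detail::helper;  // the reexport previously missed by reachability; the
                         // privacy results now seed the reachability pass with it
```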
This may have the side effect of increasing the size of metadata, but it's
difficult to avoid for correctness purposes sadly.
Closes #9790
This makes sure that the top-level crate name is correct when emitting log
statements for a monomorphized function in another crate. This happens by
tracing the monomorphized ID back to the external source and then using that
crate index to get the name of the crate.
Closes #3046
It is simply defined as `f64` across every platform right now.
A use case hasn't been presented for a `float` type defined as the
highest precision floating point type implemented in hardware on the
platform. Performance-wise, using the smallest precision correct for the
use case greatly saves on cache space and allows for fitting more
numbers into SSE/AVX registers.
If there was a use case, this could be implemented as simply a type
alias or a struct thanks to `#[cfg(...)]`.
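A sketch of what that fallback could look like (the cfg key and widths below are chosen arbitrarily for illustration):
```
#![allow(non_camel_case_types)]

#[cfg(target_pointer_width = "64")]
pub type float = f64;

#[cfg(not(target_pointer_width = "64"))]
pub type float = f32;
```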
Closes #6592
The mailing list thread, for reference:
https://mail.mozilla.org/pipermail/rust-dev/2013-July/004632.html
If an item is skipped due to it being unreachable or for some optimization, then
it shouldn't be encoded into the metadata (because it wasn't present in the
first place).
This stops private statics and functions from being usable cross-crate, along
with some bad privacy error messages. This is a reopening of #8365 with all the
privacy checks in privacy.rs instead of resolve.rs (where they should be
anyway).
These maps of exported items will hopefully get used for generating
documentation by rustdoc.
Closes #8592
Progress on #7981
This doesn't completely close the issue because `struct A;` is still allowed, and it's a much larger change to disallow that. I'm also not entirely sure that we want to disallow that. Regardless, punting that discussion to the issue instead.
If a static is flagged as address_insignificant, then for LLVM to actually
perform the relevant optimization it must have an internal linkage type. What
this means, though, is that the static will not be available to other crates.
Hence, if you have a generic function with an inner static, it will fail to link
when built as a library because other crates will attempt to use the inner
static externally.
This gets around the issue by inlining the static into the metadata. The same
relevant optimization is then applied separately in the external crate. What
this ends up meaning is that all statics tagged with #[address_insignificant]
will appear at most once per crate (by value), but they could appear in multiple
crates.
This should be the last blocker for using format! ...
This doesn't close any bugs as the goal is to convert the parameter to by-value, but this is a step towards being able to make guarantees about `&T` pointers (where T is Freeze) to LLVM.
In #8185 cross-crate condition handlers were fixed by ensuring that globals
didn't start appearing in different crates with different addresses. An
unfortunate side effect of that pull request is that constants weren't inlined
across crates (uint::bits is unknown to everything but libstd).
This commit fixes this inlining by using the `available_externally` linkage
provided by LLVM. It partially reverts #8185, and then adds support for this
linkage type. The main caveat is that not all statics could be inlined into
other crates. Before this patch, all statics were considered "inlineable items",
but an unfortunate side effect of how we deal with `&static` and `&[static]`
means that these two cases cannot be inlined across crates. The translation of
constants was modified to propagate this condition of whether a constant
should be considered inlineable into other crates.
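A hedged illustration of that caveat (period-style statics; which exact categories were affected is summarized from the description above):
```
pub static BITS: uint = 64;                  // a plain value: inlineable downstream
pub static INDIRECT: &'static uint = &BITS;  // a reference to a static: one of the
                                             // cases that cannot be inlined across
                                             // crates
```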
Closes #9036
While they may have the same name within various scopes, this changes static
names to use path_pretty_name to append some hash information at the end of the
symbol. We're then guaranteed that each static has a unique NodeId, so this
NodeId is used as the "hash" of the pretty name.
Closes #9188
Remove these in favor of the two traits themselves and the wrapper
function std::from_str::from_str.
Add the function std::num::from_str_radix in the corresponding role for
the FromStrRadix trait.
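A hedged usage sketch in the syntax of the period (results were `Option`-wrapped at the time):
```
let parsed: Option<int> = std::from_str::from_str("42");
let hex: Option<int> = std::num::from_str_radix("2a", 16);
```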
The trait will keep the `Iterator` naming, but a more concise module
name makes using the free functions less verbose. The module will define
iterables in addition to iterators, as it deals with iteration in
general.
These commits fix bugs related to identically named statics in functions of implementations in various situations. The commit messages have most of the information about what bugs are being fixed and why.
As a bonus, while I was messing around with name mangling, I improved the backtraces we'll get in gdb by removing `__extensions__` for the trait/type being implemented and by adding the method name as well. Yay!
There are 6 new compiler-recognised attributes: deprecated, experimental,
unstable, stable, frozen, locked (these levels are taken directly from
Node's "stability index"[1]). These indicate the stability of the
item to which they are attached; e.g. `#[deprecated] fn foo() { .. }`
says that `foo` is deprecated.
This comes with 3 lints for the first 3 levels (with matching names) that
will detect the use of items marked with them (the `unstable` lint
includes items with no stability attribute). The attributes can be given
a short text note that will be displayed by the lint. An example:
    #[warn(unstable)]; // `allow` by default

    #[deprecated="use `bar`"]
    fn foo() { }

    #[stable]
    fn bar() { }

    fn baz() { }

    fn main() {
        foo(); // "warning: use of deprecated item: use `bar`"
        bar(); // all fine
        baz(); // "warning: use of unmarked item"
    }
The lints currently only check the "edges" of the AST: i.e. functions,
methods[2], structs and enum variants. Any stability attributes on modules,
enums, traits and impls are not checked.
[1]: http://nodejs.org/api/documentation.html
[2]: the method check is currently incorrect and doesn't work.
As with the previous commit, this is targeted at removing the possibility of
collisions between statics. The main use case here is when there's a
type-parametric function with an inner static that's compiled as a library.
Before this commit, any impl would generate a path item of "__extensions__".
This commit changes that identifier to a "pretty name", which is either the last
element of the path of the trait implemented or the last element of the type's
path that's being implemented. That doesn't quite cut it though, so the (trait,
type) pair is hashed and again used to append information to the symbol.
Essentially, __extensions__ was removed for something nicer for debugging, and
then some more information was added to symbol name by including a hash of the
trait being implemented and type it's being implemented for. This should prevent
colliding names for inner statics in regular functions with similar names.
Before, the path name for all items defined in methods of traits and impls never
took into account the name of the method. This meant that if you had two statics
of the same name in two different methods the statics would end up having the
same symbol name (even after mangling) because the path components leading to
the symbol were exactly the same (just __extensions__ and the static name).
It turns out that if you add the symbol "A" twice to LLVM, it automatically
makes the second one "A1" instead of "A". What this meant is that in local crate
compilations we never found this bug. Even across crates, this was never a
problem. The problem arises when you have generic methods that don't get
generated at compile-time of a library. If the statics were re-added to LLVM by
a client crate of a library in a different order, you would reference different
constants (the integer suffixes wouldn't be guaranteed to be the same).
This fixes the problem by adding the method name to symbol path when building
the ast_map. In doing so, two symbols in two different methods are disambiguated
from one another.
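A small illustration of the collision, sketched in current syntax (the names are invented):
```
struct S;

impl S {
    // Before this fix, both `A`s produced the same path (and thus the same
    // mangled symbol), because the method name wasn't part of the path.
    fn first(&self) -> i32 { static A: i32 = 1; A }
    fn second(&self) -> i32 { static A: i32 = 2; A }
}
```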
Whenever a generic function was encountered, only the top-level items were
recursed upon, even though the function could contain items inside blocks or
nested inside of other expressions. This changes the existing code from
traversing just the top-level items to using a Visitor which deeply recurses
and finds any items that need to be translated.
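An example of the kind of nested item that was previously missed (illustrative):
```
pub fn generic<T>(t: T) -> T {
    // An item nested inside the function body rather than at the top level:
    // the old code skipped it; the Visitor-based traversal now finds it and
    // translates it in the client crate.
    fn nested(x: i32) -> i32 { x + 1 }
    let _ = nested(1);
    t
}
```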
This was uncovered when building code with --lib, because the encode_symbol
function would panic once it found that an item hadn't been translated.
Closes #8134
For #7083.
The metadata issue with the old version is now fixed. Ready for review.
This is also not the full solution to #7083, because this is not supported yet:
```
trait Foo : Send { }
impl <T: Send> Foo for T { }
fn foo<T: Foo>(val: T, chan: std::comm::Chan<T>) {
    chan.send(val);
}
```
cc @nikomatsakis
* All globals marked as `pub` won't have the `internal` linkage type set
* All global references across crates are forced to use the address of the
global in the other crate via an external reference.
r? @graydon
Closes #8179
Previously having optional lang_items caused an assertion failure at
compile-time, and then once that was fixed there was a segfault at runtime from
using a NULL crate-map (crates with no_std).