With the recently added double-word CAS functionality on 32-bit ARM (enabled via
a 64-bit atomic instruction in LLVM IR), LLVM lowers the code to function calls
which emulate atomic instructions unless some extra target features are enabled.
With the v7 feature enabled, proper 64-bit CAS instructions are used on 32-bit ARM.
I've been told that targeting v7 on ARM is what we should have been doing anyway.
This is overridable by providing some other non-empty feature string.
This removes `@[]` from the parser, along with as much of the handling of it (and `@str`) in the compiler as I could find.
I've just rebased @pcwalton's (already reviewed) `@str` removal (and fixed the problems in a separate commit); the only new work is the trailing commits with my authorship.
Closes #11967
I tried a couple of different ways to squash this, and still don't think this is ideal, but I wanted to get it out for feedback.
Closes #5900. Closes #9942.
There are a few scenarios where the compiler tries to evaluate CastExprs without the corresponding types being available yet in the type context: https://github.com/mozilla/rust/issues/10618, https://github.com/mozilla/rust/issues/5900, https://github.com/mozilla/rust/issues/9942
This PR takes the approach of having eval_const_expr_partial's CastExpr arm fall back to a limited ast_ty_to_ty call that only checks for (a subset of) valid const types, when the direct type lookup fails. It's kind of hacky, so I understand if you don't want to take this as is. I'd need a little mentoring to get this into better shape, as figuring out the proper fix has been a little daunting. I'm also happy if someone else wants to pick this up and run with it.
This closes #5900 and #9942, but only moves the goalposts a little on #10618, which now falls over in a later phase of the compiler.
For the purpose of deciding whether to truncate or extend the right hand side of bit shifts, use the size of the element type for SIMD vector types.
Fixes #11900.
It was possible to trigger a stack overflow in rustc because the routine used to verify enum representability,
type_structurally_contains, would recurse on inner types until hitting the original type. The overflow was triggered when a different structurally recursive type (an enum or struct) was contained in the type being checked.
I suspect my solution isn't as efficient as it could be. I pondered adding a cache of previously-seen types to avoid duplicating work (if enums A and B both contain type C, my code goes through C twice), but I didn't want to do anything that may not be necessary.
I'm a new contributor, so please pay particular attention to any unidiomatic code, misuse of terminology, bad naming of tests, or similar horribleness :)
Updated to verify struct representability as well.
Fixes #3008.
Fixes #3779.
`Times::times` was always a second-class loop because it did not support the `break` and `continue` operations. Its playful appeal (which I liked) was then lost after `do` was disabled for closures. It's time to let this one go.
This can almost be fully disabled, as it no longer breaks retrieving a
backtrace on OS X as verified by @alexcrichton. However, it still
breaks retrieving the values of parameters. This should be fixable in
the future via a proper location list...
Closes #7477
The general consensus is that we want to move away from conditions for I/O, and I propose a two-step plan for doing so:
1. Warn about unused `Result` types. When all of I/O returns `Result`, it will require you to inspect the return value for an error *only if* you have a result you want to look at. By default, for things like `write` returning `Result<(), Error>`, these will all go silently ignored. This lint will prevent blindly ignoring these return values, letting you know that there's something you should do about them.
2. Implement a `try!` macro:
```
macro_rules! try( ($e:expr) => (match $e { Ok(e) => e, Err(e) => return Err(e) }) )
```
With these two tools combined, I feel that we get almost all the benefits of conditions. The first step (the lint) is a sanity check that you're not ignoring return values at callsites. The second step is to provide a convenience method of returning early out of a sequence of computations. After thinking about this for a while, I don't think that we need the so-called "do-notation" in the compiler itself because I think it's just *too* specialized. Additionally, the `try!` macro is super lightweight, easy to understand, and works almost everywhere. As soon as you want to do something more fancy, my answer is "use match".
Basically, with these two tools in action, I would be comfortable removing conditions. What do others think about this strategy?
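As a sketch of how the two pieces would combine in practice (the `Writer`, `write_str`, and `IoError` names below are just stand-ins for whatever the Result-returning I/O API ends up looking like):

```rust
// Sketch only: `Writer`, `write_str`, and `IoError` are placeholders for a
// Result-returning I/O API, not the current std::io signatures.
fn write_greeting(w: &mut Writer) -> Result<(), IoError> {
    try!(w.write_str("hello "));   // returns Err(e) early on failure
    try!(w.write_str("world\n"));
    Ok(())
}

fn log_greeting(w: &mut Writer) {
    // Without `try!` or a `match`, dropping the Result on the floor would be
    // caught by the unused result lint instead of passing silently.
    write_greeting(w); // warning: unused result
}
```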
----
This PR specifically implements the `unused_result` lint. I actually added two lints, `unused_result` and `unused_must_use`, and the first commit has the rationale for why `unused_result` is turned off by default.
In line with the dissolution of libextra - #8784 - this moves arena and glob into
their own respective modules. Updates .gitignore with the entries
doc/{arena,glob} accordingly.
In line with the dissolution of libextra - #8784 - moves arena to its own library libarena.
Changes based on PR #11787. Updates .gitignore to ignore doc/arena.
I attempted to implement the lint in two steps. My first attempt was a
default-warn lint about *all* unused results. While this attempt did indeed find
many possible bugs, I felt that the false-positive rate was too high to be
turned on by default for all of Rust.
My second attempt was to make unused-result a default-allow lint, but allow
certain types to opt-in to the notion of "you must use this". For example, the
Result type is now flagged with #[must_use]. This lint about "must use" types is
warn by default (it's different from unused-result).
The unused_must_use lint had a 100% hit rate in the compiler, but there aren't
that many places that return Result right now. I believe that this lint is a
crucial step towards moving away from conditions for I/O (because all I/O will
return Result by default). I'm worried that this lint is a little too specific
to Result itself, but I believe that the false positive rate for the
unused_result lint is too high to make it useful when turned on by default.
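As a rough illustration of the difference between the two lints (the error type and values below are made up for the example):

```rust
// Result is tagged #[must_use], so ignoring it trips the warn-by-default
// unused_must_use lint; other ignored return values are only caught by the
// allow-by-default unused_result lint.
fn parse_flag(s: &str) -> Result<bool, ~str> {
    match s {
        "on"  => Ok(true),
        "off" => Ok(false),
        _     => Err(~"unknown flag"),
    }
}

fn main() {
    parse_flag("on");              // flagged by unused_must_use
    let _ = parse_flag("off");     // binding to `_` makes the discard explicit
    match parse_flag("bogus") {    // inspected, so nothing to warn about
        Ok(b)  => println!("{}", b),
        Err(e) => println!("error: {}", e),
    }
}
```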
Set "Dwarf Version" to 2 on OS X to avoid toolchain incompatibility, and
set "Debug Info Version" to prevent debug info from being stripped from
bitcode.
Fixes #11352.
Set "Dwarf Version" to 2 on OS X to avoid toolchain incompatibility, and
set "Debug Info Version" to prevent debug info from being stripped from
bitcode.
Fixes#11352.
cc #7621.
See the commit message. I'm not sure if we should merge this now, or wait until we can write `Clone::clone(x)` which will directly solve the above issue with perfect error messages.
This unfortunately changes an error like
error: mismatched types: expected `&&NotClone` but found `&NotClone`
into
error: type `NotClone` does not implement any method in scope named `clone`
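For reference, a minimal case that hits this error (a sketch, not a test from the patch):

```rust
struct NotClone;

#[deriving(Clone)]
struct Wrapper {
    field: NotClone, // the derived impl calls `.clone()` on each field, so the
                     // error now points at the missing `clone` method
}

fn main() {}
```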
Non-exhaustive change list:
* `self` is now present in argument lists (modulo type-checking code I don't trust myself to refactor)
* methods have the same calling convention as bare functions (including the self argument)
* the env param is gone from all bare functions (and methods), only used by closures and `proc`s
* bare functions can only be coerced to closures and `proc`s if they are statically resolved, as they now require creating a wrapper specific to that function, to avoid indirect wrappers (equivalent to `impl<..Args, Ret> Fn<..Args, Ret> for fn(..Args) -> Ret`) that might not be optimizable by LLVM and don't work for `proc`s
* refactored some `trans::closure` code, leading to the removal of `trans::glue::make_free_glue` and `ty_opaque_closure_ptr`
It was decided a long, long time ago that libextra should not exist, but rather its modules should be split out into smaller independent libraries maintained outside of the compiler itself. The theory was to use `rustpkg` to manage dependencies in order to move everything out of the compiler, but maintain an ease of usability.
Sadly, the work on `rustpkg` isn't making progress as quickly as expected, but the need for dissolving libextra is becoming more and more pressing. Because of this, we've thought that a good interim solution would be to simply package more libraries with the rust distribution itself. Instead of dissolving libextra into libraries outside of the mozilla/rust repo, we can dissolve libraries into the mozilla/rust repo for now.
Work on this has been excruciatingly painful in the past because the makefiles are completely opaque to all but a few. Adding a new library involved adding about 100 lines spread out across 8 files (incredibly error prone). The first commit of this pull request targets this pain point. It does not rewrite the build system, but rather refactors large portions of it. Afterwards, adding a new library is as simple as modifying 2 lines (easy, right?). The build system automatically keeps track of dependencies between crates (rust *and* native), promotes binaries between stages, tracks dependencies of installed tools, etc, etc.
With this newfound buildsystem power, I chose the `extra::flate` module as the first candidate for removal from libextra. While a small module, this module is relatively complex in that it has a C dependency and the compiler requires it (messing with the dependency graph a bit). Admittedly I modified more than 2 lines of makefiles to accommodate libflate (the native dependency required 2 extra lines of modifications), but the removal process was easy to do and straightforward.
---
Testing-wise, I've cross-compiled, run tests, built some docs, installed, uninstalled, etc. I'm still working out a few kinks, and I'm sure that there are going to be build system issues after this, but it should be working well for basic use!
cc #8784
This is hopefully the beginning of the long-awaited dissolution of libextra.
Using the newly created build infrastructure for building libraries, I decided
to move the first module out of libextra.
While not being a particularly meaty module in and of itself, the flate module
is required by rustc and additionally has a native C dependency. I was able to
very easily split out the C dependency from rustrt, update librustc, and
magically everything gets installed to the right locations and built
automatically.
This is meant to be a proof-of-concept commit to how easy it is to remove
modules from libextra now. I didn't put any effort into modernizing the
interface of libflate or updating it other than to remove the one glob import it
had.
This was the original intention of the privacy of structs, and it was
erroneously implemented before. A pub struct will now have default-pub fields,
and a non-pub struct will have default-priv fields. This essentially brings
struct fields in line with enum variants in terms of inheriting visibility.
As usual, extraneous visibility modifiers are disallowed, depending on the case
that you're dealing with.
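A small sketch of the new defaults (the names here are just for illustration):

```rust
pub struct Point {
    x: int,      // public by default, because the struct is `pub`
    priv y: int, // presumably fields can still opt out explicitly
}

struct Hidden {
    value: int,  // private by default, because the struct is not `pub`
}
```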
Closes #11522
They all have to go into a single module at the moment unfortunately.
Ideally, the logging macros would live in std::logging, condition! would
live in std::condition, format! in std::fmt, etc. However, this
introduces cyclic dependencies between those modules and the macros they
use which the current expansion system can't deal with. We may be able
to get around this by changing the expansion phase to a two-pass system
but that's for a later PR.
Closes #2247
cc #11763
By default, the compiler and libraries are all still built with rpaths, but this
can be opted out of with --disable-rpath to ./configure or --no-rpath to rustc.
Closes #5219
Before this commit, rustc looked in `dirname $0`/../lib for libraries
but that doesn't work when rustc is invoked through a symlink.
This commit makes rustc look in `dirname $(readlink $0)`/../lib, i.e.
it first canonicalizes the symlink before walking up the directory tree.
Fixes #3632.
The old method of serializing the AST gives totally bogus spans if the
expansion of an imported macro causes compilation errors. The best
solution seems to be to serialize the actual textual macro definition
and load it the same way the std-macros are. I'm not totally confident
that getting the source from the CodeMap will always do the right thing,
but it seems to work in simple cases.
Mutable and immutable borrows place some restrictions on what you can do
with the variable until the borrow ends. This commit attempts to convey
to the user what those restrictions are. Also, if the original borrow is
a mutable borrow, the error message has been changed (more specifically,
i. "cannot borrow `x` as immutable because it is also borrowed as
mutable" and ii. "cannot borrow `x` as mutable more than once" have
been changed to "cannot borrow `x` because it is already borrowed as
mutable").
In addition, this adds a (custom) span note to communicate where the
original borrow ends.
```rust
fn main() {
    match true {
        true => {
            let mut x = 1;
            let y = &x;
            let z = &mut x;
        }
        false => ()
    }
}
```
```
test.rs:6:21: 6:27 error: cannot borrow `x` as mutable because it is already borrowed as immutable
test.rs:6             let z = &mut x;
                              ^~~~~~
test.rs:5:21: 5:23 note: previous borrow of `x` occurs here; the immutable borrow prevents subsequent moves or mutable borrows of `x` until the borrow ends
test.rs:5             let y = &x;
                              ^~
test.rs:7:10: 7:10 note: previous borrow ends here
test.rs:3         true => {
test.rs:4             let mut x = 1;
test.rs:5             let y = &x;
test.rs:6             let z = &mut x;
test.rs:7         }
                   ^
```
```rust
fn foo3(t0: &mut &mut int) {
    let t1 = &mut *t0;
    let p: &int = &**t0;
}

fn main() {}
```
```
test.rs:3:19: 3:24 error: cannot borrow `**t0` because it is already borrowed as mutable
test.rs:3     let p: &int = &**t0;
                            ^~~~~
test.rs:2:14: 2:22 note: previous borrow of `**t0` as mutable occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `**t0` until the borrow ends
test.rs:2     let t1 = &mut *t0;
                       ^~~~~~~~
test.rs:4:2: 4:2 note: previous borrow ends here
test.rs:1 fn foo3(t0: &mut &mut int) {
test.rs:2     let t1 = &mut *t0;
test.rs:3     let p: &int = &**t0;
test.rs:4 }
           ^
```
For the "previous borrow ends here" note, if the span is too long (has too many lines), then only the first and last lines are printed, and the middle is replaced with dot dot dot:
```rust
fn foo3(t0: &mut &mut int) {
    let t1 = &mut *t0;
    let p: &int = &**t0;
}

fn main() {}
```
```
test.rs:3:19: 3:24 error: cannot borrow `**t0` because it is already borrowed as mutable
test.rs:3     let p: &int = &**t0;
                            ^~~~~
test.rs:2:14: 2:22 note: previous borrow of `**t0` as mutable occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `**t0` until the borrow ends
test.rs:2     let t1 = &mut *t0;
                       ^~~~~~~~
test.rs:7:2: 7:2 note: previous borrow ends here
test.rs:1 fn foo3(t0: &mut &mut int) {
...
test.rs:7 }
           ^
```
(Sidenote: the `span_end_note` currently also has issue #11715)
Renamed the invert() function in iter.rs to flip().
Also renamed the Invert<T> type to Flip<T>.
Some related code comments changed. Documentation that I could find has
been updated, and all the instances I could locate where the
function/type were called have been updated as well.
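A tiny usage sketch, assuming the `flip` naming described here:

```rust
fn main() {
    let v = [1, 2, 3];
    // was: for x in v.iter().invert() { ... }
    for x in v.iter().flip() {
        println!("{}", *x);
    }
}
```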
When there is a `println!` macro in the code, compilation never ends.
```rust
// print.rs
fn main() {
println!("Hello!");
}
```
```bash
$ RUST_LOG=rustc rustc print.rs
```
And this is part of the output from stderr.
```bash
# ...
Looking up syntax::ast::DefId{crate: 1u32, node: 176234u32}
looking up syntax::ast::DefId{crate: 1u32, node: 176235u32} : extra::ebml::Doc<>{data: &[168u8, 16u8, 0u8, 0u8, 16u8, 51u8, 101u8, 53u8, 97u8, 101u8, 98u8, 56u8, 51u8, 55u8, 97u8, 101u8, 49u8, 54u8, 50u8
# ...
# vector which has infinite length.
```
* Note: Rust 0.9, 0.10-pre
[On 2013-12-06, I wrote to the rust-dev mailing list](https://mail.mozilla.org/pipermail/rust-dev/2013-December/007263.html):
> Subject: Let’s avoid having both foo() and foo_opt()
>
> We have some functions and methods such as [std::str::from_utf8](http://static.rust-lang.org/doc/master/std/str/fn.from_utf8.html) that may succeed and give a result, or fail when the input is invalid.
>
> 1. Sometimes we assume the input is valid and don’t want to deal with the error case. Task failure works nicely.
>
> 2. Sometimes we do want to do something different on invalid input, so returning an `Option<T>` works best.
>
> And so we end up with both `from_utf8` and `from_utf8_opt`. This particular case is worse because we also have `from_utf8_owned` and `from_utf8_owned_opt`, to cover everything.
>
> Multiplying names like this is just not good design. I’d like to reduce this pattern.
>
> Getting behavior 1. when you have 2. is easy: just call `.unwrap()` on the Option. I think we should rename every `foo_opt()` function or method to just `foo`, remove the old `foo()` behavior, and tell people (through documentation) to use `foo().unwrap()` if they want it back.
>
> The downsides are that unwrap is more verbose and gives less helpful error messages on task failure. But I think it’s worth it.
The email discussion has gone around long enough. Let’s discuss a concrete proposal. For the following functions or methods, I removed `foo` (that caused task failure) and renamed `foo_opt` (that returns `Option`) to just `foo`.
Vector methods:
* `get_opt` (rename only, `get` did not exist as it would have been just `[]`)
* `head_opt`
* `last_opt`
* `pop_opt`
* `shift_opt`
* `remove_opt`
`std::path::BytesContainer` method:
* `container_as_str_opt`
`std::str` functions:
* `from_utf8_opt`
* `from_utf8_owned_opt` (also remove the now unused `not_utf8` condition)
Is there something else that should receive the same treatment?
I did not rename `recv_opt` on channels based on @brson’s [feedback](https://mail.mozilla.org/pipermail/rust-dev/2013-December/007270.html).
Feel free to pick only some of these commits.
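To make the intended calling pattern concrete, here is a small sketch using the post-rename `from_utf8` (taking `&[u8]` and returning an `Option`, as proposed above):

```rust
use std::str;

fn main() {
    let bytes = "hi".as_bytes();
    // Behavior 2: handle invalid input explicitly via the returned Option.
    match str::from_utf8(bytes) {
        Some(s) => println!("valid: {}", s),
        None    => println!("not valid UTF-8"),
    }
    // Behavior 1: assume the input is valid and fail the task if it is not.
    println!("{}", str::from_utf8(bytes).unwrap());
}
```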
To build for the Cortex-M series of ARM processors, LLC needs to be told to build for the Thumb instruction set. There are two ways to do this: either with the triple "thumb\*-\*-\*" or with -march=thumb (which just overrides the triple anyway). I chose the first way.
The following will fail because the local cc doesn't know what to do with -mthumb.
````
rustc test.rs --lib --target thumb-linux-eabi
error: linking with `cc` failed: exit code: 1
note: cc: error: unrecognized command line option ‘-mthumb’
````
Changing the linker works as expected.
````
rustc test.rs --lib --target thumb-linux-eabi --linker arm-none-eabi-gcc
````
Ideally I'd have the triple thumb-none-eabi, but adding a new OS looks like much more work (and I'm not familiar enough with what it does to know if it is needed).
* Stop using hardcoded numbers that all have to get updated when something changes (inevitable errors and rebase conflicts), and remove some unneeded -Z options (obsoleted over time).
* Remove `std::rt::borrowck`
The included test case would essentially never finish compiling without this
patch. It recurses twice at every ExprParen, meaning that the branching factor
is 2^n.
The included test case will take so long to parse on the old compiler that it'll
surely never let this crop up again.
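For context, the pathological input is just deeply nested parentheses; a much shallower sketch of the shape:

```rust
fn main() {
    // Each layer of parentheses doubled the work in the old check, so a few
    // dozen layers were enough to make compilation effectively never finish.
    let _x = ((((((((((1))))))))));
}
```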
Previously, they were treated like ~[] and &[] (which can have length
0), but fixed length vectors are fixed length, i.e. we know at compile
time if it's possible to have length zero (which is only for [T, .. 0]).
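A small sketch of the kind of match this affects:

```rust
fn main() {
    let v = [1, 2];
    // A [int, ..2] can never have length zero, so exhaustiveness checking no
    // longer demands an extra arm for the impossible empty case.
    match v {
        [a, b] => println!("{} {}", a, b),
    }
}
```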
Fixes #11659.
NodeIds are sequential integers starting at zero, so we can achieve some
memory savings by just storing the items all in a line in a vector.
The occupancy for typical crates seems to be 75-80%, so we're already
more efficient than a HashMap (maximum occupancy 75%), not even counting
the extra book-keeping that HashMap does.
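Roughly the idea, as a sketch (not the compiler's actual types):

```rust
// NodeIds are dense, sequential integers, so a flat vector indexed by the id
// can replace a HashMap keyed on NodeId; absent entries are just None.
struct NodeMap<T> {
    entries: ~[Option<T>],
}

impl<T> NodeMap<T> {
    fn insert(&mut self, id: uint, value: T) {
        // Grow the dense vector up to the id, then store the item in place.
        while self.entries.len() <= id {
            self.entries.push(None);
        }
        self.entries[id] = Some(value);
    }
}
```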
This commit re-works how the monitor() function works and how it both receives
and transmits errors. There are a few cases in which the compiler can abort:
1. A normal compiler error. In this case, the compiler raises a FatalError as
the failure value of the task. If this happens, then the monitor task does
nothing. It ignores all stderr output of the child task and it also
suppresses the failure message of the main task itself. This means that on a
normal compiler error just the error message itself is printed.
2. A normal internal compiler error. These are invoked from sess.span_bug() and
friends. In these cases, they follow the same path (raising a FatalError),
but they will also print an ICE message which has a URL to go report a bug.
3. An actual compiler bug. This happens whenever anything calls fail!() instead
of going through the session itself. In this case, we print out stuff about
RUST_LOG=2 and we by default capture all stderr and print via warn!() so it's
only printed out with the RUST_LOG var set.
For `use` statements, this means disallowing qualifiers when in functions and
disallowing `priv` outside of functions.
For `extern mod` statements, this means disallowing everything everywhere. It
may have been envisioned for `pub extern mod foo` to be a thing, but it
currently doesn't do anything (resolve doesn't pick it up), so better to err on
the side of forwards-compatibility and forbid it entirely for now.
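A few of the now-rejected forms, for illustration:

```rust
priv use std::num;    // error: `priv` on a module-level `use`

fn f() {
    pub use std::num; // error: visibility qualifier on a `use` inside a function
}

pub extern mod extra; // error: `extern mod` takes no visibility qualifier at all
```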
Closes #9957
* Reexport io::mem and io::buffered structs directly under io, make mem/buffered
private modules
* Remove with_mem_writer
* Remove DEFAULT_CAPACITY and use DEFAULT_BUF_SIZE (in io::buffered)
cc #11119
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except for in
a `let` initializer, where we default to the innermost block. Rules
are documented in the code, but not in the manual (yet).
See new test run-pass/cleanup-value-scopes.rs for examples.
- Refactors Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
r? @pcwalton
too.
Previously I had omitted this case since function calls don't get the same
treatment on the RHS, but it's different on the pattern and is more consistent
-- the goal is to identify `let` statements where `ref` bindings create
interior pointers.
This is a first pass on support for procedural macros that aren't hardcoded into libsyntax. It is **not yet ready to merge** but I've opened a PR to have a chance to discuss some open questions and implementation issues.
Example
=======
Here's a silly example showing off the basics:
my_synext.rs
```rust
#[feature(managed_boxes, globs, macro_registrar, macro_rules)];
extern mod syntax;
use syntax::ast::{Name, token_tree};
use syntax::codemap::Span;
use syntax::ext::base::*;
use syntax::parse::token;
#[macro_export]
macro_rules! exported_macro (() => (2))
#[macro_registrar]
pub fn macro_registrar(register: |Name, SyntaxExtension|) {
    register(token::intern(&"make_a_1"),
             NormalTT(@SyntaxExpanderTT {
                 expander: SyntaxExpanderTTExpanderWithoutContext(expand_make_a_1),
                 span: None,
             } as @SyntaxExpanderTTTrait,
             None));
}

pub fn expand_make_a_1(cx: &mut ExtCtxt, sp: Span, tts: &[token_tree]) -> MacResult {
    if !tts.is_empty() {
        cx.span_fatal(sp, "make_a_1 takes no arguments");
    }
    MRExpr(quote_expr!(cx, 1i))
}
```
main.rs:
```rust
#[feature(phase)];
#[phase(syntax)]
extern mod my_synext;
fn main() {
    assert_eq!(1, make_a_1!());
    assert_eq!(2, exported_macro!());
}
```
Overview
=======
Crates that contain syntax extensions need to define a function with the following signature and annotation:
```rust
#[macro_registrar]
pub fn registrar(register: |ast::Name, ext::base::SyntaxExtension|) { ... }
```
that should call the `register` closure with each extension it defines. `macro_rules!` style macros can be tagged with `#[macro_export]` to be exported from the crate as well.
Crates that wish to use externally loadable syntax extensions load them by adding the `#[phase(syntax)]` attribute to an `extern mod`. All extensions registered by the specified crate are loaded with the same scoping rules as `macro_rules!` macros. If you want to use a crate both for syntax extensions and normal linkage, you can use `#[phase(syntax, link)]`.
Open questions
===========
* ~~Does the `macro_crate` syntax make sense? It wraps an entire `extern mod` declaration which looks a bit weird but is nice in the sense that the crate lookup logic can be identical between normal external crates and external macro crates. If the `extern mod` syntax, changes, this will get it for free, etc.~~ Changed to a `phase` attribute.
* ~~Is the magic name `macro_crate_registration` the right way to handle extension registration? It could alternatively be handled by a function annotated with `#[macro_registration]` I guess.~~ Switched to an attribute.
* The crate loading logic lives inside of librustc, which means that the syntax extension infrastructure can't directly access it. I've worked around this by passing a `CrateLoader` trait object from the driver to libsyntax that can call back into the crate loading logic. It should be possible to pull things apart enough that this isn't necessary anymore, but it will be an enormous refactoring project. I think we'll need to create a couple of new libraries: libsynext, libmetadata/ty, and libmiddle.
* Item decorator extensions can be loaded but the `deriving` decorator itself can't be extended so you'd need to do e.g. `#[deriving_MyTrait] #[deriving(Clone)]` instead of `#[deriving(MyTrait, Clone)]`. Is this something worth bothering with for now?
Remaining work
===========
- [x] ~~There is not yet support for rustdoc downloading and compiling referenced macro crates as it does for other referenced crates. This shouldn't be too hard I think.~~
- [x] ~~This is not testable at stage1 and sketchily testable at stages above that. The stage *n* rustc links against the stage *n-1* libsyntax and librustc. Unfortunately, crates in the test/auxiliary directory link against the stage *n* libstd, libextra, libsyntax, etc. This causes macro crates to fail to properly dynamically link into rustc since names end up being mangled slightly differently. In addition, when rustc is actually installed onto a system, there are actually two copies of libsyntax, libstd, etc: the ones that user code links against and a separate set from the previous stage that rustc itself uses. By this point in the bootstrap process, the two library versions *should probably* be binary compatible, but it doesn't seem like a sure thing. Fixing this is apparently hard, but necessary to properly cross compile as well and is being tracked in #11145.~~ The offending tests are ignored during `check-stage1-rpass` and `check-stage1-cfail`. When we get a snapshot that has this commit, I'll look into how feasible it'll be to get them working on stage1.
- [x] ~~`macro_rules!` style macros aren't being exported. Now that the crate loading infrastructure is there, this should just require serializing the AST of the macros into the crate metadata and yanking them out again, but I'm not very familiar with that part of the compiler.~~
- [x] ~~The `macro_crate_registration` function isn't type-checked when it's loaded. I poked around in the `csearch` infrastructure a bit but didn't find any super obvious ways of checking the type of an item with a certain name. Fixing this may also eliminate the need to `#[no_mangle]` the registration function.~~ Now that the registration function is identified by an attribute, typechecking this will be like typechecking other annotated functions.
- [x] ~~The dynamic libraries that are loaded are never unloaded. It shouldn't require too much work to tie the lifetime of the `DynamicLibrary` object to the `MapChain` that its extensions are loaded into.~~
- [x] ~~The compiler segfaults sometimes when loading external crates. The `DynamicLibrary` reference and code objects from that library are both put into the same hash table. When the table drops, due to the random ordering the library sometimes drops before the objects do. Once #11228 lands it'll be easy to fix this.~~
Unique pointers and vectors currently contain a reference counting
header when containing a managed pointer.
This `{ ref_count, type_desc, prev, next }` header is not necessary and
not a sensible foundation for tracing. It adds needless complexity to
library code and is responsible for breakage in places where the branch
has been left out.
The `borrow_offset` field can now be removed from `TyDesc` along with
the associated handling in the compiler.
Closes #9510. Closes #11533.