For `use` statements, this means disallowing qualifiers when in functions and
disallowing `priv` outside of functions.
For `extern mod` statements, this means disallowing everything everywhere. It
may have been envisioned for `pub extern mod foo` to be a thing, but it
currently doesn't do anything (resolve doesn't pick it up), so better to err on
the side of forwards-compatibility and forbid it entirely for now.
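A hypothetical illustration of the forms that are now rejected (not taken from the actual tests):

```rust
pub extern mod extra;   // rejected: no qualifiers are accepted on `extern mod`

priv use std::num;      // rejected: `priv` on a `use` outside of a function

fn inside_a_function() {
    pub use std::num;   // rejected: no visibility qualifiers on `use` inside a function
}
```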
Closes #9957
This commit re-works how the monitor() function works and how it both receives
and transmits errors. There are a few cases in which the compiler can abort:
1. A normal compiler error. In this case, the compiler raises a FatalError as
the failure value of the task. If this happens, then the monitor task does
nothing. It ignores all stderr output of the child task and it also
suppresses the failure message of the main task itself. This means that on a
normal compiler error just the error message itself is printed.
2. A normal internal compiler error. These are invoked from sess.span_bug() and
friends. In these cases, they follow the same path (raising a FatalError),
but they will also print an ICE message which has a URL to go report a bug.
3. An actual compiler bug. This happens whenever anything calls fail!() instead
of going through the session itself. In this case, we print out a message about
RUST_LOG=2, and by default we capture all stderr output and print it via warn!(),
so it is only printed when the RUST_LOG variable is set.
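A rough sketch of that dispatch, written against today's `std::panic` API rather than the task API of the time, purely to make the three cases concrete (`FatalError` is a stand-in for the compiler's marker type):

```rust
use std::panic::{self, UnwindSafe};

// Stand-in for rustc's FatalError marker type (an assumption for this sketch).
struct FatalError;

fn monitor<F: FnOnce() + UnwindSafe>(compile: F) {
    match panic::catch_unwind(compile) {
        // Compilation finished normally: nothing to report.
        Ok(()) => {}
        Err(payload) => {
            if payload.downcast_ref::<FatalError>().is_some() {
                // Cases 1 and 2: the error (and, for ICEs, the bug-report URL)
                // was already printed through the session, so the monitor stays
                // quiet and only reports a failing exit status.
            } else {
                // Case 3: something called fail!() directly; print the ICE
                // banner and surface the captured stderr output.
                eprintln!("error: internal compiler error: unexpected failure");
            }
        }
    }
}

fn main() {
    // Simulate cases 1 and 2: the "compiler" raises a FatalError.
    monitor(|| { panic::panic_any(FatalError); });
}
```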
* Reexport io::mem and io::buffered structs directly under io, make mem/buffered
private modules
* Remove with_mem_writer
* Remove DEFAULT_CAPACITY and use DEFAULT_BUF_SIZE (in io::buffered)
cc #11119
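For example (hypothetical user code, not part of the patch), callers now name the types directly under `io` instead of reaching through the private submodule:

```rust
// Before: use std::io::mem::MemWriter;
// After: the struct is reexported directly under io.
use std::io::MemWriter;

fn main() {
    let _w = MemWriter::new();
}
```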
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except for in
a `let` initializer, where we default to the innermost block. Rules
are documented in the code, but not in the manual (yet).
See new test run-pass/cleanup-value-scopes.rs for examples.
- Refactor Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
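A minimal sketch of the default rules, assuming a type with a destructor so the cleanup point is observable (the test mentioned above has the real examples):

```rust
struct Noisy;
impl Drop for Noisy {
    fn drop(&mut self) { /* runs when the temporary's scope ends */ }
}
fn make() -> Noisy { Noisy }

fn main() {
    // Temporary in an ordinary statement: cleaned up at the end of that statement.
    make();

    // Temporary in a `let` initializer: kept alive until the end of the innermost
    // block, so the borrow stored in `r` remains valid.
    let r = &make();
    let _ = r;
}
```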
r? @pcwalton
too.
Previously I had omitted this case since function calls don't get the same
treatment on the RHS, but the situation is different for patterns, and this is
more consistent -- the goal is to identify `let` statements where `ref` bindings create
interior pointers.
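For reference, this is the kind of `let` the check is after (illustrative only, in the syntax of the time):

```rust
struct Pair { x: int, y: int }

fn main() {
    let p = Pair { x: 1, y: 2 };
    // `ref` in the `let` pattern creates borrows pointing into the interior of `p`.
    let Pair { x: ref xr, y: ref yr } = p;
    let _ = (xr, yr);
}
```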
This is a first pass on support for procedural macros that aren't hardcoded into libsyntax. It is **not yet ready to merge** but I've opened a PR to have a chance to discuss some open questions and implementation issues.
Example
=======
Here's a silly example showing off the basics:
my_synext.rs
```rust
#[feature(managed_boxes, globs, macro_registrar, macro_rules)];
extern mod syntax;
use syntax::ast::{Name, token_tree};
use syntax::codemap::Span;
use syntax::ext::base::*;
use syntax::parse::token;
#[macro_export]
macro_rules! exported_macro (() => (2))
#[macro_registrar]
pub fn macro_registrar(register: |Name, SyntaxExtension|) {
    register(token::intern(&"make_a_1"),
             NormalTT(@SyntaxExpanderTT {
                 expander: SyntaxExpanderTTExpanderWithoutContext(expand_make_a_1),
                 span: None,
             } as @SyntaxExpanderTTTrait,
             None));
}

pub fn expand_make_a_1(cx: &mut ExtCtxt, sp: Span, tts: &[token_tree]) -> MacResult {
    if !tts.is_empty() {
        cx.span_fatal(sp, "make_a_1 takes no arguments");
    }
    MRExpr(quote_expr!(cx, 1i))
}
```
main.rs:
```rust
#[feature(phase)];
#[phase(syntax)]
extern mod my_synext;
fn main() {
    assert_eq!(1, make_a_1!());
    assert_eq!(2, exported_macro!());
}
```
Overview
=======
Crates that contain syntax extensions need to define a function with the following signature and annotation:
```rust
#[macro_registrar]
pub fn registrar(register: |ast::Name, ext::base::SyntaxExtension|) { ... }
```
This function should call the `register` closure with each extension it defines. `macro_rules!`-style macros can be tagged with `#[macro_export]` to be exported from the crate as well.
Crates that wish to use externally loadable syntax extensions load them by adding the `#[phase(syntax)]` attribute to an `extern mod`. All extensions registered by the specified crate are loaded with the same scoping rules as `macro_rules!` macros. If you want to use a crate both for syntax extensions and normal linkage, you can use `#[phase(syntax, link)]`.
Open questions
===========
* ~~Does the `macro_crate` syntax make sense? It wraps an entire `extern mod` declaration which looks a bit weird but is nice in the sense that the crate lookup logic can be identical between normal external crates and external macro crates. If the `extern mod` syntax changes, this will get it for free, etc.~~ Changed to a `phase` attribute.
* ~~Is the magic name `macro_crate_registration` the right way to handle extension registration? It could alternatively be handled by a function annotated with `#[macro_registration]` I guess.~~ Switched to an attribute.
* The crate loading logic lives inside of librustc, which means that the syntax extension infrastructure can't directly access it. I've worked around this by passing a `CrateLoader` trait object from the driver to libsyntax that can call back into the crate loading logic. It should be possible to pull things apart enough that this isn't necessary anymore, but it will be an enormous refactoring project. I think we'll need to create a couple of new libraries: libsynext, libmetadata/ty, and libmiddle.
* Item decorator extensions can be loaded but the `deriving` decorator itself can't be extended so you'd need to do e.g. `#[deriving_MyTrait] #[deriving(Clone)]` instead of `#[deriving(MyTrait, Clone)]`. Is this something worth bothering with for now?
Remaining work
===========
- [x] ~~There is not yet support for rustdoc downloading and compiling referenced macro crates as it does for other referenced crates. This shouldn't be too hard I think.~~
- [x] ~~This is not testable at stage1 and sketchily testable at stages above that. The stage *n* rustc links against the stage *n-1* libsyntax and librustc. Unfortunately, crates in the test/auxiliary directory link against the stage *n* libstd, libextra, libsyntax, etc. This causes macro crates to fail to properly dynamically link into rustc since names end up being mangled slightly differently. In addition, when rustc is actually installed onto a system, there are actually two copies of libsyntax, libstd, etc: the ones that user code links against and a separate set from the previous stage that rustc itself uses. By this point in the bootstrap process, the two library versions *should probably* be binary compatible, but it doesn't seem like a sure thing. Fixing this is apparently hard, but necessary to properly cross compile as well and is being tracked in #11145.~~ The offending tests are ignored during `check-stage1-rpass` and `check-stage1-cfail`. When we get a snapshot that has this commit, I'll look into how feasible it'll be to get them working on stage1.
- [x] ~~`macro_rules!` style macros aren't being exported. Now that the crate loading infrastructure is there, this should just require serializing the AST of the macros into the crate metadata and yanking them out again, but I'm not very familiar with that part of the compiler.~~
- [x] ~~The `macro_crate_registration` function isn't type-checked when it's loaded. I poked around in the `csearch` infrastructure a bit but didn't find any super obvious ways of checking the type of an item with a certain name. Fixing this may also eliminate the need to `#[no_mangle]` the registration function.~~ Now that the registration function is identified by an attribute, typechecking this will be like typechecking other annotated functions.
- [x] ~~The dynamic libraries that are loaded are never unloaded. It shouldn't require too much work to tie the lifetime of the `DynamicLibrary` object to the `MapChain` that its extensions are loaded into.~~
- [x] ~~The compiler segfaults sometimes when loading external crates. The `DynamicLibrary` reference and code objects from that library are both put into the same hash table. When the table drops, due to the random ordering the library sometimes drops before the objects do. Once #11228 lands it'll be easy to fix this.~~
Unique pointers and vectors currently contain a reference counting
header when containing a managed pointer.
This `{ ref_count, type_desc, prev, next }` header is not necessary and
not a sensible foundation for tracing. It adds needless complexity to
library code and is responsible for breakage in places where the branch
has been left out.
The `borrow_offset` field can now be removed from `TyDesc` along with
the associated handling in the compiler.
Closes #9510. Closes #11533
This is a patch for #8005; thanks @lfairy for the hint.
It seems like `block.expr` is None if the last line of a function has a semicolon (i.e. it ends with a statement).
@kmcallister does this error message cover the intended use cases?
I'm not sure about the message; the wording and the span could probably be improved.
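A minimal illustration of the situation described above (hypothetical snippet, using the integer type of the time); the first function fails to type check, and that is exactly the case the new note is meant to point at:

```rust
fn ends_with_statement() -> int {
    1 + 1;   // trailing semicolon: the block ends with a statement, so block.expr is None
}

fn ends_with_expression() -> int {
    1 + 1    // no semicolon: this is the block's tail expression
}
```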
Unsuffixed literals like 1 and 1.1, and free type parameters, sometimes
have to be printed in error messages, which used to come out as <V0>, <VI0>
and <VF0>. This change puts the words "generic" and "integer"/"float"
into the message so it's not a complete black box.
Dead code pass now explicitly checks for `#[allow(dead_code)]` and
`#[lang=".."]` attributes on items and marks them as live if they have
those attributes. The former is done so that if we want to suppress
warnings for a group of dead functions, we only have to annotate the
"root" of the call chain.
The `print!` and `println!` macros are now the preferred method of printing, and so there is no reason to export the `stdio` functions in the prelude. The functions have also been replaced by their macro counterparts in the tutorial and other documentation so that newcomers don't get confused about what they should be using.
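For example, instead of the formerly-reexported stdio functions, code is expected to use the macros (illustrative snippet):

```rust
fn main() {
    // Preferred:
    print!("hello, ");
    println!("{}", "world");
    // rather than the prelude functions that used to be available,
    // e.g. println("hello, world").
}
```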
So far the following code
```
struct Foo;

fn main() {
    let ref mut a = Foo;
    let ref b = Foo;
    *a += *b;
}
```
errors with
```
test.rs:15:3: 13:11 error: binary operation + cannot be applied to type `Foo`
test.rs:15 *a += *b;
```
Since assignment-operators are no longer expanded to ```left = left OP right``` but are independent operators, it should be
```
test.rs:15:3: 13:11 error: binary operation += cannot be applied to type `Foo`
test.rs:15 *a += *b;
```
to make it clear that implementing Add for Foo is not gonna work. (cf issues #11143, #11344)
Besides that, we also need to typecheck the rhs expression even if the operator has no implementation, or we end up with unknown types for the nodes of the rhs and an ICE later on while resolving types. (once again cf #11143 and #11344).
This probably would get fixed with #5992, but in the meantime it's a confusing error to stumble upon.
@pcwalton, you wrote the original code, what do you think?
(closes #11143 and #11344)
That is, if you have an enum type that is subject to the nullable
pointer optimization, but the null variant has a nonzero number of
fields, and you declare a static whose value is of that variant, then
that used to be an ICE but this change fixes it.
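A hypothetical definition of the shape being described (whether the optimization actually fires depends on the exact layout rules of the time; this only illustrates the wording):

```rust
enum Cache {
    // The "null" variant, which nonetheless carries fields of its own.
    Empty(uint, uint),
    // The non-null variant holds a pointer, making the enum a candidate for
    // the nullable pointer optimization.
    Full(~int),
}

// A static whose value is the null variant: this used to ICE the compiler.
static EMPTY: Cache = Empty(0, 0);

fn main() {}
```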
This is just an unnecessary trait that no one's ever going to parameterize over
and it's more useful to just define the methods directly on the types
themselves. The implementors of this type almost always don't want
inner_mut_ref() but they're forced to define it as well.
So, like I mentioned in issue #10955, it doesn't seem like we need to call ```ty::subst_tps``` when the method is generic. But then I realized that this function doesn't mutate any of its input, and the return value is unused. Plus the type param substitution seems to be taken care of in ```trans_fn_ref_with_vtables```, so I thought I'd just try to remove it. As far as I can tell everything works.
This closes #10955.
If a reexport comes from a non-public module, then the documentation for the
reexport will be inlined into the module that exports it, but if the reexport is
targeted at a public type (like the prelude), then it is not inlined but rather
hyperlinked.
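A small sketch of the two situations (hypothetical module layout):

```rust
pub use detail::Widget;   // from a non-public module: Widget's docs are inlined here
pub use config::Options;  // already has a public home: only a hyperlink is emitted

mod detail {
    pub struct Widget;
}

pub mod config {
    pub struct Options;
}
```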
This pull request fixes #11083. The problem was that recursive type definitions were not properly handled for enum types, leading to problems with LLVM's metadata "uniquing". This bug has already been fixed for struct types some time ago (#9658) but I seem to have forgotten about enums back then. I added the offending code from issue #11083 as a test case.
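For reference, a recursive enum of the kind involved looks something like this (a hypothetical reconstruction, not the exact code from the issue):

```rust
// The Cons variant refers back to the enum itself, so building the debug
// metadata has to handle the cycle.
enum List {
    Cons(int, ~List),
    Nil,
}
```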
The resulting symbol names aren't very pretty at all:
trait Trait { fn method(&self); }
impl<'a> Trait for ~[(&'a int, fn())] { fn method(&self) {} }
gives
Trait$$UP$$VEC$$TUP_2$$BP$int$$FN$$::method::...hash...::v0.0
However, at least it contains some reference to the Self type, unlike
`Trait$__extensions__::method:...`, which is what the symbol name used
to be for anything other than `impl Trait for foo::bar::Baz` (which
became, and still becomes, `Trait$Baz::method`).
Right now if you have concurrent builds of two libraries in the same directory
(such as rustc's bootstrapping process), it's possible that two libraries will
stomp over each others' metadata, producing corrupt rlibs.
By placing the metadata file in a tempdir we're guaranteed to not conflict with
any other builds happening concurrently. Normally this isn't a problem because
output filenames are scoped to the name of the crate, but metadata is special in
that it has the same name across all crates.
This PR adds `std::unstable::intrinsics::{volatile_load,volatile_store}`, which map to LLVM's `load volatile` and `store volatile` operations, respectively.
This would fix #11172.
I have addressed several uncertainties with this PR in the line comments.
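In current Rust these operations correspond to the stable `std::ptr::read_volatile` and `std::ptr::write_volatile` functions; a small sketch in today's syntax (not the 0.9-era intrinsics API) of what a volatile store and load look like:

```rust
fn main() {
    let mut flag: u32 = 0;
    unsafe {
        // `store volatile`
        std::ptr::write_volatile(&mut flag as *mut u32, 1);
        // `load volatile`
        let v = std::ptr::read_volatile(&flag as *const u32);
        assert_eq!(v, 1);
    }
}
```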
The comments have more information as to why this is done, but the basic idea is
that finding an exported trait is actually a fairly difficult problem. The true
answer lies in whether a trait is ever referenced from another exported method,
and right now this kind of analysis doesn't exist, so the conservative answer of
"yes" is always returned to answer whether a trait is exported.
Closes #11224. Closes #11225