This required changing almost all users of hashmaps to import the hashmap interface first.
The `size` member in the hashmap structure was renamed to `count` to work around a name conflict.
Issue #352. Closes #1720
The old checker would happily accept things like 'alt x { @some(a) { a } }'.
It now properly descends into patterns, checks exhaustiveness of booleans,
and complains when number/string patterns aren't exhaustive.
Now that core exports "option" as a synonym for option::t, search-and-replace
option::t with option.
The only places that still refer to option::t are the modules in libcore that
use option, because fixing this requires a new snapshot (forthcoming).
When no built-in interpretation is found for one of the operators
mentioned below, the typechecker will try to turn it into a method
call with the name written next to it. For binary operators, the
method will be called on the LHS with the RHS as its only parameter.
Binary:
+ op_add
- op_sub
* op_mul
/ op_div
% op_rem
& op_and
| op_or
^ op_xor
<< op_shift_left
>> op_shift_right
>>> op_ashift_right
Unary:
- op_neg
! op_not
Overloading of the indexing ([]) operator isn't finished yet.
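A minimal sketch of the binary case, using a made-up record type (point and
point_ops are not names from the compiler or libraries):

type point = {x: int, y: int};

impl point_ops for point {
    // invoked for `a + b` when `a` is a point
    fn op_add(other: point) -> point {
        {x: self.x + other.x, y: self.y + other.y}
    }
}

fn main() {
    let a = {x: 1, y: 2};
    let b = {x: 3, y: 4};
    let c = a + b;    // desugars to a.op_add(b)
    assert c.x == 4 && c.y == 6;
}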
Issue #1520
Although it's not really needed. Without that fix, reported spans will
likely be bogus if the error is within the first few lines
(probably around 5) of that file. Thus, many of the compile-fail
tests will fail due to incorrect locations.
Check that in export foo{}, foo is an enum type, and that in export
foo{bar, quux}, foo is an enum type and bar and quux are variants belonging
to foo.
See issue 1426 for details. Now, the semantics of "export t;" where t is a tag are
to export all of t's variants as well. "export t{};" exports t but not its
variants, while "export t{a, b, c};" exports only variants a, b, c of t.
To do:
- documentation
- there's currently no checking that a, b, c are actually variants of t in the
above example
- there's also no checking that t is an enum type in the latter two examples above
- change the modules listed in issue 1426 that should have the old export
semantics to use the t{} syntax
I deleted the test export-no-tag-variants since we're doing the opposite now,
and other tests cover the same behavior.
Support Lenny222's proposed syntax for exporting a tag without
its variants, or selected variants from a tag, in the AST and parser.
No support further down the line yet. Tests are xfailed.
Previously, typestate would conclude that this function was
correctly diverging:
fn f() -> ! { ret; fail; }
even though it always returns to the caller. It wasn't handling the
i_diverge and i_return bits correctly in the fail case. Fixed it.
Closes #897
Remove disr_val from ast::variant_ and always use ty::variant_info
when the value is needed. Move what was done during parsing into
other passes, primarily typeck.rs. This move also correctly type checks
the disr. value expression, thus fixing rustc --pretty=typed when
disr. values are used.
Addresses issue #1393.
For now, disallow disr. values unless all variants use nullary
constructors (i.e. "enum-like").
Disr. values are now encoded in the crate metadata, but only when they
differ from the value that would be inferred from declaration order.
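For example, explicit values like these are accepted because every variant is
nullary (names and values are made up):

enum color {
    red = 0xff0000,
    green = 0x00ff00,
    blue = 0x0000ff
}

Adding a variant with a constructor argument to the same enum would make the
explicit values an error.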
Now, if you have a tag named "foo", a variable declaration like
"let foo..." is illegal. This change makes it possible to eliminate
the '.' after a nullary tag pattern in an alt (but I'll be doing
that in a future commit) -- as now it's always obvious whether a
name refers to a tag or a new declared variable.
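A minimal sketch of the new rule (made-up names):

enum foo { a, b }

fn main() {
    // let foo = 3;   // now rejected: foo already names a tag
    let quux = 3;     // fine: quux does not collide with a tag name
    assert quux == 3;
}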
resolve implements this change -- all the other changes are just to
get rid of existing code that declares variables that shadow tag
names.
We should probably warn when defining a method foo on {foo: int}, etc.
This should reduce the number of useless typevars that are allocated.
Issue #1227
I think it should be undefined to have multiple modules that link in the same
library but provide different link arguments. Unfortunately we don't track
link_args by module -- they are just appended as discovered into the crate
store -- but for now, it should be an error to provide link_args on a module
that's already been included (with or without link_args).
Get rid of expr_self_call and introduce def_self. `self` is now,
syntactically, simply a variable. A method implicitly brings a `self`
binding into scope.
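A small sketch of what this looks like in a method body (type and impl names
are made up):

type counter = {n: int};

impl counter_methods for counter {
    fn get() -> int { self.n }    // self is just an ordinary binding here
    fn sum(other: counter) -> int { self.n + other.n }
}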
Issue #1227
The path information was an optional "filename" component of the crate
directive AST. It is now replaced by an attribute with metadata named
"path".
With this commit, a directive
mod foo = "foo.rs";
should be written as:
#[path = "foo.rs"]
mod foo;
Closes issue #906.
It's proving too inflexible, so I'm ripping out the extra complexity
in the hope that regions will, at some point, provide something
similar.
Closes #918
This involved adding 'copy' to more generics than I hoped, but an
experiment with making it implicit showed that that way lies madness --
unless enforced, you will not remember to mark functions that don't
copy as not requiring copyable kind.
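For instance, a generic that uses its argument twice now has to ask for the
copyable kind explicitly (a sketch; the exact spelling of the kind annotation
was in flux around this time):

fn dup<T: copy>(x: T) -> [T] {
    [x, x]    // using x twice requires T to be declared copyable
}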
Issue #1177
This goes before a snapshot, so that subsequent patches can make the
transition without breaking the build. Disables kind checking pass, makes
parser accept both new and old-style kind annotation.
Issue #1177
This patch changes how to specify ABI and link name of a native module.
Before:
native "cdecl" mod llvm = "rustllvm" {...}
After:
#[abi = "cdecl"]
#[link_name = "rustllvm"]
native mod llvm {...}
The old optional syntax for ABI and link name is no longer supported.
Fixes issue #547
It now threads information about invalidated aliases through the AST
properly. This makes it more permissive for conditionals (invalidating
an alias in one branch doesn't prevent you from using it in another),
and less permissive for loops (it now properly notices when a loop
invalidates an alias that it might still use in another iteration).
Closes #1144
Patch to report an error and fail, instead of consuming all available
memory and then crashing, when detecting an unmatched double quote
before the end of a file.
I couldn't get it to show nice error messages, so this may not be
the ideal fix.
A test case for this situation has also been added.
So *resource, ~resource, [resource] are all pinned. This is counter to the
design of the kind system, but this way is a much clearer path to type safety.
Once we've established a good baseline with lots of tests, then we can try to
make raising pinned kinds work.
We were only using it in a single place, and there for no discernible reason
(probably as part of the bare-fn-vals-are-not-copyable plan). It seems more
surprising than useful.
It is now 1-based, rather than 0-based. (Seems more natural, and allows 0 to
be used to refer to self and maybe to closure.)
Also allows non-referenced args to be implicitly copied again.
Issue #918
Blocks (or statements involving blocks) that end in a semicolon are no
longer considered the block-expression of their outer block. This used
to be an expression block, but now is a statement block:
{ if foo { ret 1; } else { ret 10; } }
This helps clear up some ambiguities in our grammar.
Upvars are now marked with def_upvar throughout, not just when going
through freevars::lookup_def. This makes things less error-prone. One
thing to watch out for is that def_upvar is used in `for each` bodies
too, when they refer to a local outside the body.
Having it in the alias pass was slightly more efficient (finding
expression roots has to be done in both passes), but further muddled
up the already complex alias checker.
Also factors out some duplication in the mutability-checking code.
Closes #868. Unfortunately, this causes certain invalid programs to
fail type-checking instead of failing type-state when a type-state
error message would probably be more intuitive. (Although, by any
reasonable interpretation of the static semantics, it technically
ought to be a type error.)
Autoderef on binops is basically unused, kind of silly, and
complicates typechecking. There were only three instances of it in the
compiler and the test drivers, two of which were of the form "*foo =
foo + 1", which should be written as "*foo += 1" anyways.
I tried to pay attention to what was actually being tested, so, e.g., when a
test was just using a vec as a boxed thing, I converted it to boxed ints, etc.
Haven't converted the macro tests yet. Not sure what to do there.
Previously, typestate was initializing the init constraint for
a declared-but-not-initialized variable (like x in "let x;") to False,
but other constraints to Don't-know. This led to over-lenient results
when a variable was used before declaration (see the included test
case). Now, everything gets initialized to False in the prestate/poststate-
finding phase, and Don't-know should only be used in pre/postconditions.
This aspect of the algorithm really needs formalization (just on paper),
but for now, this closes #700
In the writeback phase, the typechecker now checks that it isn't
replacing a type variable T with a type that contains T. It
also does an occurs check in do_autoderef in order to avoid
getting into an infinite chain of derefs.
I'm a bit worried that there are more places where the occurs
check needs to happen where I'm not doing it now, though.
Closes #768
While it is still technically possible to test stage 0, it is not part of any
of the main testing rules and maintaining xfail-stage0 is a chore. Nobody
should worry about how tests fare in stage0.
The logic for how the "returns" constraint was handled was always
dodgy, for reasons explained in the comments I added to
auxiliary::fn_info in this commit. Fixed it by adding distinct
"returns" and "diverges" constraints for each function, which
are both handled positively (that is: for a ! function, the
"diverges" constraint must be true on every exit path; for
any other function, the "returns" constraint must be true
on every exit path).
Closes #779
This was previously disallowed by the typechecker and not properly handled
in trans. I removed the typechecker check (replacing it with a simpler
check that spawned functions don't have type params) and fixed trans.
Closes #756.
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
This change also adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
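For example, a compile-fail test might carry header comments like these (the
error text and flag shown are illustrative, not taken from a real test):

// error-pattern:unresolved name
// error-pattern:aborting due to previous errors
// compile-flags:--no-trans

fn main() { let x = undefined_name; }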
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Because there are no longer individual make rules, each rpass, etc., test no
longer remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
Programs with constrained types now parse and typecheck, but
typestate doesn't check them specially, so the one relevant test
case so far is XFAILed.
Also rewrote all of the constraint-related data structures in the
process (again), for some reason. I got rid of a superfluous
data structure in the context that was mapping front-end constraints
to resolved constraints, instead handling constraints in the same
way in which everything else gets resolved.