After sitting down to build on the work merged in #14318, I realized that some of the test names were not clear, others probably weren't testing the right thing, and they were also not as exhaustive as they could have been.
Instead of calling a borrow() function that takes a pointer type, just
create a local pointer and dereference it. The dereference is there to
outsmart any future liveness analysis in borrowck.
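For illustration, a minimal sketch of the pattern (struct and test names hypothetical):

    struct A { a: int, b: int }

    fn copy_after_borrow() {
        let x = A { a: 1, b: 2 };
        let p = &x.a;   // borrow a field directly instead of calling a borrow() helper
        let _y = x.b;   // the copy under test
        *p;             // dereference so a future liveness analysis keeps the borrow alive
    }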
The move_after_borrow / fu_move_after_borrow tests in
run-pass/borrowck-field-sensitivity.rs are not testing the right thing,
since the scope of the borrow is limited to the call to borrow(). When
fixed, these tests fail and thus should be moved to the corresponding
compile-fail test file.
A number of borrowck field-sensitivity tests perform more moves and
copies than their naming scheme would indicate. This is only necessary
for borrowed pointers (to ensure that the borrows stay alive in the
near future when borrow liveness is tracked), but all other test
functions should be changed to match their name more closely.
Some of the borrowck field-sensitivity test functions have 'use' in
their name, but they don't refer to the specific kind of use (whether a
copy or a deref). It would be better if the name more precisely
reflected what the function is testing.
All rust functions are internal implementation details with respect to the ABI
exposed by crates, but extern fns are public components of the ABI and shouldn't
be stripped. This commit serializes reachable extern fns into metadata so that
their symbols are not stripped when LTO is performed.
Closes #14500
This commit carries out the request from issue #14678:
> The method `Iterator::len()` is surprising, as all the other uses of
> `len()` do not consume the value. `len()` would make more sense to be
> called `count()`, but that would collide with the current
> `Iterator::count(|T| -> bool) -> unit` method. That method, however, is
> a bit redundant, and can be easily replaced with
> `iter.filter(|x| x < 5).count()`.
> After this change, we could then define the `len()` method
> on `iter::ExactSize`.
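For illustration only, a rough sketch of the replacement pattern (not taken from the issue):

    let v = vec![1i, 4, 2, 8, 5, 7];
    let n = v.iter().filter(|&&x| x < 5).count();   // was: v.iter().count(|&x| x < 5)
    let total = v.iter().count();                   // was: v.iter().len()
    println!("{} of {}", n, total);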
Closes #14678.
[breaking-change]
Division and remainder by 0 are undefined behavior, and are detected at runtime.
This commit adds support for ensuring that MIN / -1 is also checked for at
runtime, as this would cause signed overflow, which is undefined behavior.
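For illustration, a minimal sketch of the newly checked case (values chosen to trip it):

    fn main() {
        // division by zero was already checked; MIN / -1 now fails at runtime too
        let (a, b) = (std::i32::MIN, -1i32);
        let _ = a / b;  // fails at runtime instead of silently overflowing
    }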
Closes #8460
I tried to split up the less mechanical changes into separate commits so they are easier to review. One thing I'm not quite sure of is whether `MoveReason` should just be replaced with `move_data::MoveKind`.
When converting check_loans to use ExprUseVisitor I encountered a few
issues where the wrong number of errors were being emitted for multiple
closure captures, but there is no existing test for this.
Part of #14248; fixes #14420.
Removed @richo's contribution (outdated comment)
Quoting @brson: let's move forward with this one. The only
statement I'm missing is @richo's and it sounds like his was a
minor patch.
As with the previous commit with `librand`, this commit shuffles around some
`collections` code. The new state of the world is similar to that of librand:
* The libcollections crate now only depends on libcore and liballoc.
* The standard library has a new module, `std::collections`. All functionality
of libcollections is reexported through this module.
I would like to stress that this change is purely cosmetic. There are very few
alterations to these primitives.
There are a number of notable points about the new organization:
* std::{str, slice, string, vec} all moved to libcollections. There is no reason
these primitives shouldn't be usable in a freestanding context that has
allocation. These are all reexported in their usual places in
the standard library.
* The `hashmap`, and transitively the `lru_cache`, modules no longer reside in
`libcollections`, but rather in libstd. The reason for this is that the
`HashMap::new` constructor requires access to the OSRng for initially seeding
the hash map. Beyond this requirement, there is no reason that the hashmap
could not move to libcollections.
I do, however, have a plan to move the hash map to the collections module. The
`HashMap::new` function could be altered to require that the `H` hasher
parameter ascribe to the `Default` trait, allowing the entire `hashmap` module
to live in libcollections. The key idea would be that the default hasher would
be different in libstd. Something along the lines of:
    // src/libstd/collections/mod.rs
    pub type HashMap<K, V, H = RandomizedSipHasher> =
        core_collections::HashMap<K, V, H>;
This is not possible today because you cannot invoke static methods through
type aliases. If we modified the compiler, however, to allow invocation of
static methods through type aliases, then this type definition would
essentially be switching the default hasher from `SipHasher` in libcollections
to a libstd-defined `RandomizedSipHasher` type. This type's `Default`
implementation would randomly seed the `SipHasher` instance, and otherwise
perform the same as `SipHasher`.
This future state doesn't seem incredibly far off, but until that time comes,
the hashmap module will live in libstd to not compromise on functionality.
* In preparation for the hashmap moving to libcollections, the `hash` module has
moved from libstd to libcollections. A previously snapshotted commit enables a
distinct `Writer` trait to live in the `hash` module which `Hash`
implementations are now parameterized over.
Due to using a custom trait, the `SipHasher` implementation has lost its
specialized methods for writing integers. These can be re-added
backwards-compatibly in the future via default methods if necessary, but the
FNV hashing should satisfy much of the need for speedier hashing.
A list of breaking changes:
* HashMap::{get, get_mut} no longer fails with the key formatted into the error
message with `{:?}`, instead, a generic message is printed. With backtraces,
it should still be not-too-hard to track down errors.
* The HashMap, HashSet, and LruCache types are now available through
std::collections instead of the collections crate.
* Manual implementations of hash should be parameterized over `hash::Writer`
instead of just `Writer`.
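For illustration, a rough sketch of a manual implementation after this change (generic parameter name and exact paths assumed):

    use std::hash;
    use std::hash::Hash;

    struct Pair { a: int, b: int }

    impl<S: hash::Writer> Hash<S> for Pair {
        fn hash(&self, state: &mut S) {
            self.a.hash(state);
            self.b.hash(state);
        }
    }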
[breaking-change]
This fix suppresses dead_code warnings from code generated by regex! when
the result of regex! is unused. Correct behavior should be a single
unused variable warning.
Regression tests are included for both `let` and `static` bound regex!
values.
see #14185
A few notable improvements were implemented to cut down on the number of aborts
triggered by the standard library when a local task is not found.
* Primarily, the unwinding functionality was restructured to support an unsafe
top-level function, `try`. This function invokes a closure, capturing any
failure which occurs inside of it. The purpose of this function is to be as
lightweight of a "try block" as possible for rust, intended for use when the
runtime is difficult to set up.
This function is *not* meant to be used by normal rust code, nor should it be
considered for use with normal rust code.
* When invoking spawn(), a `fail!()` is triggered rather than an abort.
* When invoking LocalIo::borrow(), which is transitively called by all I/O
constructors, None is returned rather than aborting to indicate that there is
no local I/O implementation.
* Invoking get() on a TLD key will return None if no task is available.
* Invoking replace() on a TLD key will fail if no task is available.
A test case was also added showing the variety of things that you can do without
a runtime or task set up now. In general, this is just a refactoring to abort
less quickly in the standard library when a local task is not found.
This completes the last stage of the renaming of the comparison hierarchy of
traits. This change renames TotalEq to Eq and TotalOrd to Ord.
In the future the new Eq/Ord will be filled out with their appropriate methods,
but for now this change is purely a renaming change.
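For illustration, a deriving list under the final names (sketch, using the `deriving` attribute of the time):

    #[deriving(PartialEq, Eq, PartialOrd, Ord)]
    struct Version { major: uint, minor: uint }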
[breaking-change]
This is part of the ongoing renaming of the equality traits. See #12517 for more
details. All code using Eq/Ord will temporarily need to move to Partial{Eq,Ord}
or the Total{Eq,Ord} traits. The Total traits will soon be renamed to {Eq,Ord}.
cc #12517
[breaking-change]
Make check_for_assignment_to_restricted_or_frozen_location treat
mutation through an owning pointer the same way it treats mutation
through an &mut pointer, where mutability must be inherited from the
base path.
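A minimal sketch of the kind of code this check is aimed at (illustrative, not taken from the issue):

    fn main() {
        let mut b = box 0i;
        let p = &*b;    // freeze the contents reachable through the owning pointer
        *b = 1;         // now rejected: assignment through `b` while `*b` is borrowed
        println!("{}", *p);
    }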
I also included GC pointers in this check, as that is what the
corresponding code in gather_loans/restrictions.rs does, but I don't
think there is a way to test this with the current language.
Fixes #14498.
A number of functions/methods have been moved or renamed to align
better with rust standard conventions.
serialize::ebml::reader::Doc => serialize::ebml::Doc::new
serialize::ebml::reader::Decoder => Decoder::new
serialize::ebml::writer::Encoder => Encoder::new
[breaking-change]
This commit shuffles around some of the `rand` code, along with some
reorganization. The new state of the world is as follows:
* The librand crate now only depends on libcore. This interface is experimental.
* The standard library has a new module, `std::rand`. This interface will
eventually become stable.
Unfortunately, this entailed more of a breaking change than just shuffling some
names around. The following breaking changes were made to the rand library:
* Rng::gen_vec() was removed. This has been replaced with Rng::gen_iter() which
will return an infinite stream of random values. Previous behavior can be
regained with `rng.gen_iter().take(n).collect()`
* Rng::gen_ascii_str() was removed. This has been replaced with
Rng::gen_ascii_chars() which will return an infinite stream of random ascii
characters. Similarly to gen_iter(), previous behavior can be emulated with
`rng.gen_ascii_chars().take(n).collect()`
* {IsaacRng, Isaac64Rng, XorShiftRng}::new() have all been removed. These all
relied on being able to use an OSRng for seeding, but this is no longer
available in librand (where these types are defined). To retain the same
functionality, these types now implement the `Rand` trait so they can be
generated with a random seed from another random number generator. This allows
the stdlib to use an OSRng to create seeded instances of these RNGs.
* Rand implementations for `Box<T>` and `@T` were removed. These seemed to be
pretty rare in the codebase, and it allows for librand to not depend on
liballoc. Additionally, other pointer types like Rc<T> and Arc<T> were not
supported. If this is undesirable, librand can depend on liballoc and regain
these implementations.
* The WeightedChoice structure is no longer built with a `Vec<Weighted<T>>`,
but rather a `&mut [Weighted<T>]`. This means that the WeightedChoice
structure now has a lifetime associated with it.
* The `sample` method on `Rng` has been moved to a top-level function in the
`rand` module due to its dependence on `Vec`.
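A rough migration sketch for the removed gen_vec()/gen_ascii_str() convenience methods (illustrative; pre-1.0 syntax assumed):

    use std::rand::{task_rng, Rng};

    fn main() {
        let mut rng = task_rng();
        // was: rng.gen_vec::<uint>(5)
        let v: Vec<uint> = rng.gen_iter().take(5).collect();
        // was: rng.gen_ascii_str(16)
        let s: String = rng.gen_ascii_chars().take(16).collect();
        println!("{} {}", v.len(), s);
    }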
cc #13851
[breaking-change]
So far the DWARF information for enums was different
for regular enums, univariant enums, Option-like enums,
etc. Regular enums were encoded as unions of structs,
while the other variants were encoded as bare structs.
With the changes in this PR all enums are encoded as
unions so that debuggers can reconstruct whether something
originally was a struct, a univariant enum, or an
Option-like enum. For the latter case, information
about the Null variant is encoded into the union field
name. This information can then be used by the
debugger to print a None value actually as None
instead of Some(0x0).
When spawning a process, stdio file descriptors can be configured to be ignored,
which basically means that they'll be closed. Currently this is done by
literally closing the file descriptors in the child, but this can have adverse
side effects if the child process then opens a new file descriptor, assigning it
to a stdio number.
To work around the problems of the child, this commit alters the process
spawning code to map stdio fds to /dev/null on unix (and a similar equivalent on
windows) when they are specified as being ignored. This should allow spawned
programs to have more expected behavior when opening new files.
Closes #14456
This commit moves reflection (as well as the {:?} format modifier) to a new
libdebug crate, all of which is marked experimental.
This is a breaking change because it now requires the debug crate to be
explicitly linked if the :? format qualifier is used. This means that any code
using this feature will have to add `extern crate debug;` to the top of the
crate. Any code relying on reflection will also need to do this.
Closes #12019
[breaking-change]
Change `for` desugaring & make refutable pattern errors more precise
This changes `for` loops to desugar to the `let`-based pattern match described in #14390, and adjusts the compiler to use this information in error messages, which now mention that the pattern is in a `for` loop.
Also, it makes the compiler record the exact positions of refutable parts of a pattern, to point to exactly them in error messages.
This ensures that a public typedef to a private item is public in
terms of linkage. This affects both the visibility of the library's symbols as
well as other lints based on privacy (dead_code for example).
Closes #14421. Closes #14422.
The compiler now gives more precise error messages when a pattern is
refutable: it points directly to the refutable part of the pattern and says
that the problem is the pattern being refutable (rather than just saying that
some value isn't covered in the `match`, as it did previously).
Fixes #14390.
There's a fair number of attributes that have to be whitelisted since
they're either looked for by rustdoc, in trans, or as needed. These can
be cleaned up in the future.
* All of the *_val functions have gone from #[unstable] to #[stable]
* The overwrite and zeroed functions have gone from #[unstable] to #[stable]
* The uninit function is now deprecated, replaced by its stable counterpart,
uninitialized
[breaking-change]
All of these features have been obsolete since February 2014, and most have
been obsolete since 2013. There shouldn't be any more need to keep around the
parser hacks after this length of time.
It can be easy to accidentally bloat the size of an enum by making one variant
larger than the others. When this happens, it usually goes unnoticed. This
commit adds a lint that can warn when the largest variant in an enum is more
than 3 times larger than the second-largest variant. This requires a little
bit of rejiggering, because size information is only available in trans, but
lint levels are only available in the lint context.
It is allow-by-default because it's pretty noisy, and the pattern isn't really
*that* undesirable.
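An illustrative example of the kind of enum the lint targets (sketch; the lint's exact name is not shown here):

    enum Message {
        Quit,
        Code(u8),
        Payload([u8, ..1024]),  // far larger than the other variants
    }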
Closes #10362
Excluding the functions inherited from the cast module last week (with marked
stability levels), these functions received the following treatment.
* size_of - this method has become #[stable]
* nonzero_size_of/nonzero_size_of_val - these methods have been removed
* min_align_of - this method is now #[stable]
* pref_align_of - this method has been renamed without the
`pref_` prefix, and it is the "default alignment" now. This decision is in line
with what clang does (see url linked in comment on function). This function
is now #[stable].
* init - renamed to zeroed and marked #[stable]
* uninit - marked #[stable]
* move_val_init - renamed to overwrite and marked #[stable]
* {from,to}_{be,le}{16,32,64} - all functions marked #[stable]
* swap/replace/drop - marked #[stable]
* size_of_val/min_align_of_val/align_of_val - these functions are marked
#[unstable], but will continue to exist in some form. Concerns have been
raised about their `_val` prefix.
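A small usage sketch of the functions in their new state (illustrative):

    use std::mem;

    fn main() {
        let size = mem::size_of::<int>();           // stable
        let zero: int = unsafe { mem::zeroed() };   // renamed from `init`
        println!("{} {}", size, zero);
    }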
[breaking-change]
This commit is part of the ongoing libstd facade efforts (cc #13851). The
compiler now recognizes some language items as "extern { fn foo(...); }" and
will automatically perform the following actions:
1. The foreign function has a pre-defined name.
2. The crate and downstream crates can only be built as rlibs until a crate
defines the lang item itself.
3. The actual lang item has a pre-defined name.
This is essentially nicer compiler support for the hokey
core-depends-on-std-failure scheme today, but it is implemented the same way.
The details are a little more hidden under the covers.
In addition to failure, this commit promotes the eh_personality and
rust_stack_exhausted functions to official lang items. The compiler can generate
calls to these functions, causing linkage errors if they are left undefined. The
checking for these items is not as precise as it could be. Crates compiling with
`-Z no-landing-pads` will not need the eh_personality lang item, and crates
compiling with no split stacks won't need the stack exhausted lang item. For
ease, however, these items are checked for presence in all final outputs of the
compiler.
It is quite easy to define dummy versions of the functions necessary:
#[lang = "stack_exhausted"]
extern fn stack_exhausted() { /* ... */ }
#[lang = "eh_personality"]
extern fn eh_personality() { /* ... */ }
cc #11922, rust_stack_exhausted is now a lang item
cc #13851, libcollections is blocked on eh_personality becoming weak
Tweak region inference to ignore constraints like `'a <= 'static`, since they
have no value. This also ensures that we can handle some obscure cases of fn
subtyping with bound regions that we previously didn't handle correctly.
Fixes #13974.
This has no tests because it's near impossible to test -- since TestFn uses
`proc`s, they cannot be cloned or tested for equality. The only way to really
test this is making sure that for a given number of shards `a`, sharding from
1 to `a` yields the complete set of tests. But `filter_tests` takes its vector
by value and `proc`s cannot be compared.
[breaking-change]
Closes #10898
This is an implementation of RFC 16. A module can now only be loaded if the
module declaring `mod name;` "owns" the current directory. A module is
considered to own its directory if it meets one of the following criteria:
* It is the top-level crate file
* It is a `mod.rs` file
* It was loaded via `#[path]`
* It was loaded via `include!`
* The module was declared via an inline `mod foo { ... }` statement
For example, this directory structure is now invalid:
// lib.rs
mod foo;
// foo.rs
mod bar;
// bar.rs
fn bar() {}
With this change `foo.rs` must be renamed to `foo/mod.rs`, and `bar.rs` must be
renamed to `foo/bar.rs`. This makes it clear that `bar` is a submodule of `foo`,
and can only be accessed through `foo`.
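For reference, the corresponding valid layout is:
// lib.rs
mod foo;
// foo/mod.rs
mod bar;
// foo/bar.rs
fn bar() {}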
RFC: 0016-module-file-system-hierarchy
Closes #14180
[breaking-change]
Fix #13732.
This is a revised, much less hacky form of PR #13753.
The changes here:
* add instrumentation to aid debugging of linkage errors,
* fine-tune some things in the Makefile where we were telling binaries to use a host-oriented path for finding dynamic libraries when we should be feeding them a target-oriented path for dynamic libraries.
* pass along the current stage number to run-make tests, and
* skip certain tests when running atop stage1.
Fix #13746 as well.
Two line summary: Distinguish HOST_RPATH and TARGET_RPATH; added
RPATH_LINK_SEARCH; skip tests broken in stage1; general cleanup.
`HOST_RPATH_VAR$(1)_T_$(2)_H_$(3)` and `TARGET_RPATH_VAR$(1)_T_$(2)_H_$(3)`
both match the format of the old `RPATH_VAR$(1)_T_$(2)_H_$(3)` (which
is still being set the same way that it was before, to one of either
HOST/TARGET depending on what stage we are building). Namely, the format
is <XXX>_RPATH_VAR = "<LD_LIB_PATH_ENVVAR>=<COLON_SEP_PATH_ENTRIES>"
What this commit does:
* Pass both of the (newly introduced) HOST and TARGET rpath setup vars
to `maketest.py`
* Update `maketest.py` to no longer update the LD_LIBRARY_PATH itself
Instead, it passes along the HOST and TARGET rpath setup vars in
environment variables `HOST_RPATH_ENV` and `TARGET_RPATH_ENV`
* Also, pass the current stage number to maketest.py; it in turn
passes it (via an env var) to run-make tests.
This allows the run-make tests to selectively change behavior
(e.g. turn themselves off) to deal with incompatibilities with
e.g. stage1.
* Cleanup: Distinguish in tools.mk between the command to run (`RUN`)
and the file to generate to drive that command (`RUN_BINFILE`). The
main thing this enables is that `RUN` can now setup the
`TARGET_RPATH_ENV` without having to dirty up the runner code in
each of the `run-make` Makefiles.
* Cleanup: Factored out commands to delete dylib/rlib into
REMOVE_DYLIBS/REMOVE_RLIBS.
There were places where we were only calling `rm $(call DYLIB,foo)`
even though we really needed to get rid of the whole glob (at least
based on alex's findings on #13753 that removing the symlink does not
suffice).
Therefore rather than peppering the code with the awkward
`rm $(TMPDIR)/$(call DYLIB_GLOB,foo)`, I instead introduced a common
`REMOVE_DYLIBS` user function that expands into that when called.
After adding an analogous `REMOVE_RLIBS`, I changed all of the
existing calls that rm dylibs or rlibs to use these routines
instead.
Note that the latter is not a true refactoring since I may have
changed cases where it was our intent to only remove the sym-link.
(But if that is the case, then we need to more deeply investigate
alex's findings on #13753 where the system was still dynamically
loading up the non-symlinked libraries that it finds on the load
path.)
* Added RPATH_LINK_SEARCH command and use it on Linux.
On some platforms, namely Linux, when you have libboot.so that has
its internal rpath set (to e.g. $(ORIGIN)/path/to/HOSTDIR), the
linker still complains when you do the link step and it does not
know where to find libraries that libboot.so depends upon that live
in HOSTDIR (think e.g. librustuv.so).
As far as I can tell, the GNU linker will consult the
LD_LIBRARY_PATH as part of the linking process to find such
libraries. But if you want to be more careful and not override
LD_LIBRARY_PATH for the `gcc` invocation, then you need some other
way to tell the linker where it can find the libraries that
libboot.so needs. The solution to this on Linux is the
`-Wl,-rpath-link` command line option.
However, this command line option does not exist on Mac OS X, (which
appears to be figuring out how to resolve the libboot.dylib
dependency by some other means, perhaps by consulting the rpath
setting within libboot.dylib).
So, in order to abstract over this distinction, I added the
RPATH_LINK_SEARCH macro to the run-make infrastructure and added
calls to it where necessary to get Linux working. On architectures
other than Linux, the macro expands to nothing.
* Disable miscellaneous tests atop stage1.
* An especially interesting instance of the previous bullet point:
Excuse regex from doing rustdoc tests atop stage1.
This was a (nearly-) final step to get `make check-stage1` working
again.
The use of a special-case check for regex here is ugly but is
analogous to other similar checks for regex, such as the one that landed
in PR #13844.
The way this is written, the user will get a reminder that
doc-crate-regex is being skipped whenever their rules attempt to do
the crate documentation tests. This is deliberate: I want people
running `make check-stage1` to be reminded about which cases are
being skipped. (But if such echo noise is considered offensive, it
can obviously be removed.)
* Got windows working with the above changes.
This portion of the commit is a cleanup revision of the (previously
mentioned on try builds) re-architecting of how the LD_LIBRARY_PATH
setup and extension is handled in order to accommodate Windows' (1.)
use of `$PATH` for that purpose and (2.) use of spaces in `$PATH`
entries (problematic for make and for interoperation with tools at
the shell).
* In addition, since the code has been rearchitected to pass the
HOST_RPATH_DIR/TARGET_RPATH_DIR rather than a whole sh
environment-variable setting command, there is no need for the
convert_path_spec calls in maketest.py, which in fact were put in
place to placate Windows but were now causing the Windows builds to
fail. Instead we just convert the paths to absolute paths just like
all of the other path arguments.
Also, note for makefile hackers: apparently you cannot quote operands
to `ifeq` in Makefile (or at least, you need to be careful about
adding them, e.g. to only one side).
Change `bytes!()` to return
    {
        static BYTES: &'static [u8] = &[...];
        BYTES
    }
This gives it the `'static` lifetime, whereas before it had an rvalue
lifetime. Until recently this would have prevented assigning `bytes!()`
to a static, as in
static FOO: &'static [u8] = bytes!(1,2,3);
but #14183 fixed it so blocks are now allowed in constant expressions
(with restrictions).
Fixes #11641.
This commit fills in the documentation holes for the FormatWriter trait which
were previously accidentally left blank. Additionally, this adds the `write_fmt`
method to the trait to allow usage of the `write!` macro with implementors of
the `FormatWriter` trait. This is not useful for consumers of the standard
library who should generally avoid the `FormatWriter` trait, but it is useful
for consumers of the core library who are not using the standard library.
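For illustration, a rough sketch of an implementor used with `write!` (type and method names assumed from the era's API):

    use std::fmt;
    use std::fmt::FormatWriter;

    struct Counter { written: uint }

    impl FormatWriter for Counter {
        fn write(&mut self, bytes: &[u8]) -> fmt::Result {
            self.written += bytes.len();
            Ok(())
        }
    }

    fn main() {
        let mut c = Counter { written: 0 };
        let _ = write!(&mut c, "{}-{}", 1i, "two");
        println!("{} bytes formatted", c.written);
    }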
This slightly adjusts the NullablePointer representation for some enums in the case where the non-nullable variant has a single field (the ptr field) to be just that, the pointer. This is in contrast to the current behaviour where we'd wrap that single pointer in an LLVM struct.
Fixes #11040 & #11303.
This commit is part of the libstd facade RFC, issue #13851. This creates a new
library, liballoc, which is intended to be the core allocation library for all
of Rust. It is pinned on the basic assumption that an allocation failure is an
abort or failure.
This library has inherited the heap/libc_heap modules from std::rt, the owned/rc
modules from std, and the arc module from libsync. These three pointers are
currently the three most core pointer implementations in Rust.
The UnsafeArc type in std::sync should be considered deprecated and replaced by
Arc<Unsafe<T>>. This commit does not currently migrate to this type, but future
commits will continue this refactoring.
This plugs a leak where resolve was treating enums defined in parent modules as
in-scope for all children modules when resolving a pattern identifier. This
eliminates the code path in resolve entirely.
If this breaks any existing code, then it indicates that the variants need to be
explicitly imported into the module.
Closes #14221
[breaking-change]
1. Wherever the `buf` field of a `Formatter` was used, the `Formatter` is used
instead.
2. The usage of `write_fmt` is minimized; the `write!` macro is preferred
wherever possible.
3. Usage of `fmt::write` is minimized, favoring the `write!` macro instead.
Each test works by rendering the flowgraph for the last identified
block we see in expanded pretty-printed output, and comparing it (via
`diff`) against a checked in "foo.dot-expected.dot" file.
Each test post-processes the output to remove NodeIds ` (id=NUM)` so
that the expected output is somewhat stable (or at least independent
of how we assign NodeIds) and easier for a human to interpret when
looking at the expected output file itself.
----
Test writing style notes:
I usually tried to write the tests in a way that would avoid duplicate
labels in the output rendered flow graph, when possible.
The tests that have string literals "unreachable" in the program text
are deliberately written that way to remind the reader that the
unreachable nodes in the resulting graph are not an error in the
control flow computation, but rather a natural consequence of its
construction.
After discussion with Alex, we think the proper policy is for dtors
to not fail. This is consistent with C++. BufferedWriter already
does this, so this patch modifies TempDir to not fail in the dtor,
adding a `close` method for handling errors on destruction.
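For illustration, a rough sketch of the intended usage (paths and return types assumed):

    use std::io::TempDir;

    fn main() {
        let tmp = TempDir::new("example").unwrap();
        // ... use tmp.path() ...
        if tmp.close().is_err() {
            println!("failed to remove the temporary directory");
        }
    }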
## Process API
The existing APIs for spawning processes took strings for the command
and arguments, but the underlying system may not impose utf8 encoding,
so this is overly limiting.
The assumption we actually want to make is just that the command and
arguments are viewable as [u8] slices with no interior NULLs, i.e., as
CStrings. The ToCStr trait is a handy bound for types that meet this
requirement (such as &str and Path).
However, since the commands and arguments are often a mixture of
strings and paths, it would be inconvenient to take a slice with a
single T: ToCStr bound. So this patch revamps the process creation API
to instead use a builder-style interface, called `Command`, allowing
arguments to be added one at a time with differing ToCStr
implementations for each.
The initial cut of the builder API has some drawbacks that can be
addressed once issue #13851 (libstd as a facade) is closed. These are
detailed as FIXMEs.
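For illustration, a rough sketch of the builder-style API (method names assumed):

    use std::io::Command;

    fn main() {
        // previously: a single constructor taking the program name plus a slice of owned-string arguments
        let mut child = Command::new("echo").arg("hello").arg("world").spawn().unwrap();
        let _ = child.wait();
    }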
## Dynamic library API
`std::unstable::dynamic_library::open_external` currently takes a
`Path`, but because `Paths` produce normalized strings, this can
change the semantics of lookups in a given environment. This patch
generalizes the function to take a `ToCStr`-bounded type, which
includes both `Path`s and `str`s.
## ToCStr API
Adds ToCStr impl for &Path and ~str. This is a stopgap until DST (#12938) lands.
Until DST lands, we cannot decompose &str into & and str, so we cannot
usefully take ToCStr arguments by reference (without forcing an
additional & around &str). So we are instead temporarily adding an
instance for &Path and ~str, so that we can take ToCStr as owned. When
DST lands, the &Path instance should be removed, the string instances
should be revisted, and arguments bound by ToCStr should be passed by
reference.
FIXMEs have been added accordingly.
## Tickets closed
Closes #11650.
Closes #7928.
[breaking-change]
(Only after adding the tests did I realize that this is not really a
special case at the AST level; as far as the visitor is concerned,
`int` and `i32` and `i64` are just idents.)
Namely: non-pub `use` declarations *are* significant to the SVH
computation, since they can change which traits are part of the method
resolution step, and thus affect which methods get called from the
(potentially inlined) code.
Provides better help for the resolve failures inside an `impl` if the name matches:
- a field on the self type
- a method on the self type
- a method on the current trait ref (in a trait impl)
Trait method suggestions are not handled in a regular `impl` (as you can see on line 69 of the test), though I believe it is possible.
Also, provides a better message when `self` fails to resolve due to being a static method.
It's using some unsafe pointers to skip copying the larger structures (which are only used in error conditions); it's likely possible to get it working with lifetimes (all the useful refs should outlive the visitor calls) but I haven't really figured that out for this case. (can switch to copying code if wanted)
Closes #2356.
This implements set_timeout() for std::io::Process which will affect wait()
operations on the process. This follows the same pattern as the rest of the
timeouts emerging in std::io::net.
The implementation was super easy for everything except libnative on unix
(backwards from usual!), which required a good bit of signal handling. There's a
doc comment explaining the strategy in libnative. Internally, this also required
refactoring the "helper thread" implementation used by libnative to allow for an
extra helper thread (not just the timer).
This is a breaking change in terms of the io::Process API. It is now possible
for wait() to fail, and subsequently wait_with_output(). These two functions now
return IoResult<T> due to the fact that they can time out.
Additionally, the wait_with_output() function has moved from taking `&mut self`
to taking `self`. If a timeout occurs while waiting with output, the semantics
are undesirable in almost all cases if attempting to re-wait on the process.
Equivalent functionality can still be achieved by dealing with the output
handles manually.
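For illustration, a rough sketch of the new shape of the API (exact units and error value assumed):

    use std::io::Command;

    fn main() {
        let mut p = Command::new("sleep").arg("5").spawn().unwrap();
        p.set_timeout(Some(1_000));  // same Option<u64>-in-milliseconds style as io::net
        match p.wait() {
            Ok(status) => println!("exited with {}", status),
            Err(..) => println!("timed out waiting for the child"),
        }
    }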
[breaking-change]
cc #13523
LLVM internally uses `uint64_t` for array size, but the corresponding
C API (`LLVMArrayType`) uses `unsigned int`, so the value is truncated.
Therefore rustc generates the wrong type for large fixed-size vectors, e.g.
`[0 x i8]` for `[0u8, ..(1 << 32)]`.
This patch adds `LLVMRustArrayType` function for `uint64_t` support.
Integers are always parsed as a u64 in libsyntax, but they're stored as i64. The
parser and pretty printer both printed an i64 instead of u64, sometimes
introducing an extra negative sign.
* Added `// no-pretty-expanded` to pretty-print a test, but not run it through
the `expanded` variant.
* Removed #[deriving] and other expanded attributes after they are expanded
* Removed hacks around &str and &&str and friends (from both the parser and the
pretty printer).
* Un-ignored a bunch of tests
After testing `--pretty normal`, it tries to run `--pretty expanded` and
typecheck output.
Here we don't check convergence since it really diverges: for every
iteration, some extra lines (e.g. `extern crate std`) are inserted.
Some tests are `ignore-pretty`-ed since they cause various issues
with `--pretty expanded`.
The compiler was updated to recognize that implementations for ty_uniq(..) are
allowed if the Box lang item is located in the current crate. This enforces the
idea that libcore cannot allocate, and moves all related trait implementations
from libcore to libstd.
This is a breaking change in that the AnyOwnExt trait has moved from the any
module to the owned module. Any previous users of std::any::AnyOwnExt should now
use std::owned::AnyOwnExt instead. This was done because the trait is intended
for `Box` types and only `Box` types.
[breaking-change]
Added a run-pass test to ensure that processes can be correctly spawned
using non-ASCII arguments, working directory, and environment variables.
It also tests Unicode support of os::env_as_bytes.
An additional assertion was added to the test for make_command_line to
verify it handles Unicode correctly.
Been meaning to try my hand at something like this for a while, and noticed something similar mentioned as part of #13537. The suggestion on the original ticket is to use `TcpStream::open(&str)` to pass in a host + port string, but seems a little cleaner to pass in host and port separately -- so a signature like `TcpStream::open(&str, u16)`.
Also means we can use std::io::net::addrinfo directly instead of using e.g. liburl to parse the host+port pair from a string.
One outstanding issue in this PR that I'm not entirely sure how to address: in open_timeout, the timeout_ms will apply for every A record we find associated with a hostname -- probably not the intended behavior, but I didn't want to waste my time on elaborate alternatives until the general idea was a-OKed. :)
Anyway, perhaps there are other reasons for us to prefer the original proposed syntax, but thought I'd get some thoughts on this. Maybe there are some solid reasons to prefer using liburl to do this stuff.
Prior to this commit, TcpStream::connect and TcpListener::bind took a
single SocketAddr argument. This worked well enough, but the API felt a
little too "low level" for most simple use cases.
A great example is connecting to rust-lang.org on port 80. Rust users would
need to:
1. resolve the IP address of rust-lang.org using
io::net::addrinfo::get_host_addresses.
2. check for errors
3. if all went well, use the returned IP address and the port number
to construct a SocketAddr
4. pass this SocketAddr to TcpStream::connect.
I'm modifying the type signature of TcpStream::connect and
TcpListener::bind so that the API is a little easier to use.
TcpStream::connect now accepts two arguments: a string describing the
host/IP of the host we wish to connect to, and a u16 representing the
remote port number.
Similarly, TcpListener::bind has been modified to take two arguments:
a string describing the local interface address (e.g. "0.0.0.0" or
"127.0.0.1") and a u16 port number.
Here's how to port your Rust code to use the new TcpStream::connect API:
// old ::connect API
let addr = SocketAddr { ip: Ipv4Addr(127, 0, 0, 1), port: 8080 };
let stream = TcpStream::connect(addr).unwrap();
// new ::connect API (minimal change)
let addr = SocketAddr { ip: Ipv4Addr(127, 0, 0, 1), port: 8080 };
let stream = TcpStream::connect(addr.ip.to_str(), addr.port).unwrap();
// new ::connect API (more compact)
let stream = TcpStream::connect("127.0.0.1", 8080).unwrap();
// new ::connect API (hostname)
let stream = TcpStream::connect("rust-lang.org", 80);
Similarly, for TcpListener::bind:
// old ::bind API
let addr = SocketAddr { ip: Ipv4Addr(0, 0, 0, 0), port: 8080 };
let mut acceptor = TcpListener::bind(addr).listen();
// new ::bind API (minimal change)
let addr = SocketAddr { ip: Ipv4Addr(0, 0, 0, 0), port: 8080 };
let mut acceptor = TcpListener::bind(addr.ip.to_str(), addr.port).listen();
// new ::bind API (more compact)
let mut acceptor = TcpListener::bind("0.0.0.0", 8080).listen();
[breaking-change]
This commit revisits the `cast` module in libcore and libstd, and scrutinizes
all functions inside of it. The result was to remove the `cast` module entirely,
folding all functionality into the `mem` module. Specifically, this is the fate
of each function in the `cast` module.
* transmute - This function was moved to `mem`, but it is now marked as
#[unstable]. This is due to planned changes to the `transmute`
function and how it can be invoked (see the #[unstable] comment).
For more information, see RFC 5 and #12898
* transmute_copy - This function was moved to `mem`, with clarification that it
is not an error to invoke it with T/U that are different
sizes, but rather that it is strongly discouraged. This
function is now #[stable]
* forget - This function was moved to `mem` and marked #[stable]
* bump_box_refcount - This function was removed due to the deprecation of
managed boxes as well as its questionable utility.
* transmute_mut - This function was previously deprecated, and removed as part
of this commit.
* transmute_mut_unsafe - This function doesn't serve much of a purpose when it
can be achieved with an `as` in safe code, so it was
removed.
* transmute_lifetime - This function was removed because it is likely a strong
indication that code is incorrect in the first place.
* transmute_mut_lifetime - This function was removed for the same reasons as
`transmute_lifetime`
* copy_lifetime - This function was moved to `mem`, but it is marked
`#[unstable]` now due to the likelihood of being removed in
the future if it is found to not be very useful.
* copy_mut_lifetime - This function was also moved to `mem`, but had the same
treatment as `copy_lifetime`.
* copy_lifetime_vec - This function was removed because it is not used today,
and its existence is not necessary with DST
(copy_lifetime will suffice).
In summary, the cast module was stripped down to these functions, and then the
functions were moved to the `mem` module.
transmute - #[unstable]
transmute_copy - #[stable]
forget - #[stable]
copy_lifetime - #[unstable]
copy_mut_lifetime - #[unstable]
[breaking-change]
Printing <no-bounds> on trait objects comes from a time when trait
objects had a non-empty default bounds set. As they no longer have any
default bounds, printing <no-bounds> is just noise.
With `~[T]` no longer growable, the `FromIterator` impl for `~[T]` doesn't make
much sense. Not only that, but nearly everywhere it is used is to convert from
a `Vec<T>` into a `~[T]`, for the sake of maintaining existing APIs. This turns
out to be a performance loss, as it means every API that returns `~[T]`, even a
supposedly non-copying one, is in fact doing extra allocations and memcpy's.
Even `&[T].to_owned()` is going through `Vec<T>` first.
Remove the `FromIterator` impl for `~[T]`, and adjust all the APIs that relied
on it to start using `Vec<T>` instead. This includes rewriting
`&[T].to_owned()` to be more efficient, among other performance wins.
Also add a new mechanism to go from `Vec<T>` -> `~[T]`, just in case anyone
truly needs that, using the new trait `FromVec`.
[breaking-change]
The code in resolve erroneously assumed that private enums weren't visited, so
the logic was adjusted to check to see if the enum definition itself was public.
Closes #11680
As part of #5527 I had to make some changes here and I just couldn't take it anymore. Refactor the writeback code. Should be functionally equivalent to the old stuff.
r? @pcwalton
This commit brings the local_data api up to modern rust standards with a few key
improvements:
* All functionality is now exposed as a method on the keys themselves. Instead
of importing std::local_data, you now use "key.set()" and "key.get()".
* All closures have been removed in favor of RAII functionality. This means that
get() and get_mut() no longer require closures, but rather return
Option<SmartPointer> where the smart pointer takes care of relinquishing the
borrow and also implements the necessary Deref traits.
* The modify() function was removed to cut the local_data interface down to its
bare essentials (similarly to how RefCell removed set/get).
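A rough sketch of the resulting shape of the API, following the method names described above (signatures assumed):

    local_data_key!(counter: int)

    fn main() {
        counter.set(1);
        match counter.get() {
            Some(v) => println!("{}", *v),  // smart pointer, derefs to the stored value
            None => {}
        }
    }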
[breaking-change]