(Only after adding the tests did I realize that this is not really a
special case at the AST level; as far as the visitor is concerned,
`int` and `i32` and `i64` are just idents.)
Namely: non-pub `use` declarations *are* significant to the SVH
computation, since they can change which traits are part of the method
resolution step, and thus affect which methods get called from the
(potentially inlined) code.
The existing APIs for spawning processes took strings for the command
and arguments, but the underlying system may not impose utf8 encoding,
so this is overly limiting.
The assumption we actually want to make is just that the command and
arguments are viewable as [u8] slices with no interior NULLs, i.e., as
CStrings. The ToCStr trait is a handy bound for types that meet this
requirement (such as &str and Path).
However, since the commands and arguments are often a mixture of
strings and paths, it would be inconvenient to take a slice with a
single T: ToCStr bound. So this patch revamps the process creation API
to instead use a builder-style interface, called `Command`, allowing
arguments to be added one at a time with differing ToCStr
implementations for each.
The initial cut of the builder API has some drawbacks that can be
addressed once issue #13851 (libstd as a facade) is closed. These are
detailed as FIXMEs.
Closes#11650.
[breaking-change]
Provides better help for the resolve failures inside an `impl` if the name matches:
- a field on the self type
- a method on the self type
- a method on the current trait ref (in a trait impl)
Trait method suggestions are not handled in a regular `impl` (as you can see on line 69 of the test), though I believe it is possible.
Also, provides a better message when `self` fails to resolve due to being a static method.
It's using some unsafe pointers to skip copying the larger structures (which are only used in error conditions); it's likely possible to get it working with lifetimes (all the useful refs should outlive the visitor calls) but I haven't really figured that out for this case. (can switch to copying code if wanted)
Closes#2356.
This implements set_timeout() for std::io::Process which will affect wait()
operations on the process. This follows the same pattern as the rest of the
timeouts emerging in std::io::net.
The implementation was super easy for everything except libnative on unix
(backwards from usual!), which required a good bit of signal handling. There's a
doc comment explaining the strategy in libnative. Internally, this also required
refactoring the "helper thread" implementation used by libnative to allow for an
extra helper thread (not just the timer).
This is a breaking change in terms of the io::Process API. It is now possible
for wait() to fail, and consequently for wait_with_output() to fail as well.
These two functions now return IoResult<T> because they can time out.
Additionally, the wait_with_output() function has moved from taking `&mut self`
to taking `self`. If a timeout occurs while waiting with output, re-waiting on
the process has undesirable semantics in almost all cases.
Equivalent functionality can still be achieved by dealing with the output
handles manually.
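For illustration, a hedged sketch of the new timeout in use; the millisecond `Option<u64>` argument and the exact error returned on timeout are assumptions here:

```rust
use std::io::process::Command;

fn main() {
    let mut child = Command::new("sleep").arg("30").spawn().unwrap();
    // assumed signature: set_timeout(Option<u64>) in milliseconds,
    // affecting the wait() calls below
    child.set_timeout(Some(500));
    match child.wait() {
        Ok(status) => println!("exited, success: {}", status.success()),
        // on timeout this is an IoError; the child is still running and can
        // be waited on again (or killed) after adjusting the timeout
        Err(e) => println!("wait failed: {}", e),
    }
}
```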
[breaking-change]
cc #13523
LLVM internally uses `uint64_t` for the array size, but the corresponding
C API (`LLVMArrayType`) uses `unsigned int`, so the value is truncated.
As a result, rustc generates the wrong type for large fixed-size vectors,
e.g. `[0 x i8]` for `[0u8, ..(1 << 32)]`.
This patch adds `LLVMRustArrayType` function for `uint64_t` support.
Integers are always parsed as a u64 in libsyntax, but they're stored as i64. The
parser and pretty printer both printed an i64 instead of u64, sometimes
introducing an extra negative sign.
* Added `// no-pretty-expanded` to pretty-print a test, but not run it through
the `expanded` variant.
* Removed #[deriving] and other expanded attributes after they are expanded
* Removed hacks around &str and &&str and friends (from both the parser and the
pretty printer).
* Un-ignored a bunch of tests
After testing `--pretty normal`, it tries to run `--pretty expanded` and
typecheck the output.
Here we don't check for convergence since the output genuinely diverges: on
every iteration, some extra lines (e.g. `extern crate std`) are inserted.
Some tests are `ignore-pretty`-ed since they cause various issues
with `--pretty expanded`.
The compiler was updated to recognize that implementations for ty_uniq(..) are
allowed if the Box lang item is located in the current crate. This enforces the
idea that libcore cannot allocate, and moves all related trait implementations
from libcore to libstd.
This is a breaking change in that the AnyOwnExt trait has moved from the any
module to the owned module. Any previous users of std::any::AnyOwnExt should now
use std::owned::AnyOwnExt instead. This was done because the trait is intended
for Box traits and only Box traits.
[breaking-change]
Added a run-pass test to ensure that processes can be correctly spawned
using non-ASCII arguments, working directory, and environment variables.
It also tests Unicode support of os::env_as_bytes.
An additional assertion was added to the test for make_command_line to
verify it handles Unicode correctly.
Been meaning to try my hand at something like this for a while, and noticed something similar mentioned as part of #13537. The suggestion on the original ticket is to use `TcpStream::open(&str)` to pass in a host + port string, but seems a little cleaner to pass in host and port separately -- so a signature like `TcpStream::open(&str, u16)`.
Also means we can use std::io::net::addrinfo directly instead of using e.g. liburl to parse the host+port pair from a string.
One outstanding issue in this PR that I'm not entirely sure how to address: in open_timeout, the timeout_ms will apply for every A record we find associated with a hostname -- probably not the intended behavior, but I didn't want to waste my time on elaborate alternatives until the general idea was a-OKed. :)
Anyway, perhaps there are other reasons for us to prefer the original proposed syntax, but thought I'd get some thoughts on this. Maybe there are some solid reasons to prefer using liburl to do this stuff.
Prior to this commit, TcpStream::connect and TcpListener::bind took a
single SocketAddr argument. This worked well enough, but the API felt a
little too "low level" for most simple use cases.
A great example is connecting to rust-lang.org on port 80. Rust users would
need to:
1. resolve the IP address of rust-lang.org using
io::net::addrinfo::get_host_addresses.
2. check for errors
3. if all went well, use the returned IP address and the port number
to construct a SocketAddr
4. pass this SocketAddr to TcpStream::connect.
I'm modifying the type signature of TcpStream::connect and
TcpListener::bind so that the API is a little easier to use.
TcpStream::connect now accepts two arguments: a string describing the
host/IP of the host we wish to connect to, and a u16 representing the
remote port number.
Similarly, TcpListener::bind has been modified to take two arguments:
a string describing the local interface address (e.g. "0.0.0.0" or
"127.0.0.1") and a u16 port number.
Here's how to port your Rust code to use the new TcpStream::connect API:
// old ::connect API
let addr = SocketAddr { ip: Ipv4Addr(127, 0, 0, 1), port: 8080 };
let stream = TcpStream::connect(addr).unwrap();
// new ::connect API (minimal change)
let addr = SocketAddr { ip: Ipv4Addr(127, 0, 0, 1), port: 8080 };
let stream = TcpStream::connect(addr.ip.to_str(), addr.port).unwrap();
// new ::connect API (more compact)
let stream = TcpStream::connect("127.0.0.1", 8080).unwrap();
// new ::connect API (hostname)
let stream = TcpStream::connect("rust-lang.org", 80);
Similarly, for TcpListener::bind:
// old ::bind API
let addr = SocketAddr { ip: Ipv4Addr(0, 0, 0, 0), port: 8080 };
let mut acceptor = TcpListener::bind(addr).listen();
// new ::bind API (minimal change)
let addr = SocketAddr { ip: Ipv4Addr(0, 0, 0, 0), port: 8080 };
let mut acceptor = TcpListener::bind(addr.ip.to_str(), addr.port).listen();
// new ::bind API (more compact)
let mut acceptor = TcpListener::bind("0.0.0.0", 8080).listen();
[breaking-change]
This commit revisits the `cast` module in libcore and libstd, and scrutinizes
all functions inside of it. The result was to remove the `cast` module entirely,
folding all functionality into the `mem` module. Specifically, this is the fate
of each function in the `cast` module.
* transmute - This function was moved to `mem`, but it is now marked as
#[unstable]. This is due to planned changes to the `transmute`
function and how it can be invoked (see the #[unstable] comment).
For more information, see RFC 5 and #12898
* transmute_copy - This function was moved to `mem`, with clarification that it
is not an error to invoke it with T/U that are different
sizes, but rather that it is strongly discouraged. This
function is now #[stable]
* forget - This function was moved to `mem` and marked #[stable]
* bump_box_refcount - This function was removed due to the deprecation of
managed boxes as well as its questionable utility.
* transmute_mut - This function was previously deprecated, and removed as part
of this commit.
* transmute_mut_unsafe - This function doesn't serve much of a purpose when it
can be achieved with an `as` in safe code, so it was
removed.
* transmute_lifetime - This function was removed because it is likely a strong
indication that code is incorrect in the first place.
* transmute_mut_lifetime - This function was removed for the same reasons as
`transmute_lifetime`
* copy_lifetime - This function was moved to `mem`, but it is marked
`#[unstable]` now due to the likelihood of being removed in
the future if it is found to not be very useful.
* copy_mut_lifetime - This function was also moved to `mem`, but had the same
treatment as `copy_lifetime`.
* copy_lifetime_vec - This function was removed because it is not used today,
and its existence is not necessary with DST
(copy_lifetime will suffice).
In summary, the cast module was stripped down to these functions, and then the
functions were moved to the `mem` module.
transmute - #[unstable]
transmute_copy - #[stable]
forget - #[stable]
copy_lifetime - #[unstable]
copy_mut_lifetime - #[unstable]
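As a small illustration of the new call sites (a hedged sketch; only the `mem` locations come from this commit, the rest is incidental):

```rust
use std::mem;

fn main() {
    // previously std::cast::transmute; now std::mem::transmute (#[unstable])
    let bits: u32 = 0x3f800000;
    let float: f32 = unsafe { mem::transmute(bits) };
    assert_eq!(float, 1.0);

    // previously std::cast::forget; now std::mem::forget (#[stable])
    let v = vec!(1, 2, 3);
    unsafe { mem::forget(v); } // deliberately leak `v` instead of dropping it
}
```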
[breaking-change]
Printing <no-bounds> on trait objects comes from a time when trait
objects had a non-empty default bounds set. As they no longer have any
default bounds, printing <no-bounds> is just noise.
With `~[T]` no longer growable, the `FromIterator` impl for `~[T]` doesn't make
much sense. Not only that, but nearly everywhere it is used is to convert from
a `Vec<T>` into a `~[T]`, for the sake of maintaining existing APIs. This turns
out to be a performance loss, as it means every API that returns `~[T]`, even a
supposedly non-copying one, is in fact doing extra allocations and memcpy's.
Even `&[T].to_owned()` is going through `Vec<T>` first.
Remove the `FromIterator` impl for `~[T]`, and adjust all the APIs that relied
on it to start using `Vec<T>` instead. This includes rewriting
`&[T].to_owned()` to be more efficient, among other performance wins.
Also add a new mechanism to go from `Vec<T>` -> `~[T]`, just in case anyone
truly needs that, using the new trait `FromVec`.
[breaking-change]
The code in resolve erroneously assumed that private enums weren't visited, so
the logic was adjusted to check to see if the enum definition itself was public.
Closes#11680
have no value. This also ensures that we can handle some obscure cases of fn
subtyping with bound regions that we previously didn't handle correctly.
Fixes#13974.
As part of #5527 I had to make some changes here and I just couldn't take it anymore. Refactor the writeback code. Should be functionally equivalent to the old stuff.
r? @pcwalton
This commit brings the local_data api up to modern rust standards with a few key
improvements:
* All functionality is now exposed as a method on the keys themselves. Instead
of importing std::local_data, you now use "key.set()" and "key.get()".
* All closures have been removed in favor of RAII functionality. This means that
get() and get_mut() no longer require closures, but rather return
Option<SmartPointer> where the smart pointer takes care of relinquishing the
borrow and also implements the necessary Deref traits
* The modify() function was removed to cut the local_data interface down to its
bare essentials (similarly to how RefCell removed set/get).
[breaking-change]
This commit brings the local_data api up to modern rust standards with a few key
improvements:
* The `pop` and `set` methods have been combined into one method, `replace`
* The `get_mut` method has been removed. All interior mutability should be done
through `RefCell`.
* All functionality is now exposed as a method on the keys themselves. Instead
of importing std::local_data, you now use "key.replace()" and "key.get()".
* All closures have been removed in favor of RAII functionality. This means that
get() and get_mut() no longer require closures, but rather return
Option<SmartPointer> where the smart pointer takes care of relinquishing the
borrow and also implements the necessary Deref traits
* The modify() function was removed to cut the local_data interface down to its
bare essentials (similarly to how RefCell removed set/get).
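For illustration, a hedged sketch of the resulting API (the `local_data_key!` invocation and the `uint` payload are incidental choices here):

```rust
local_data_key!(counter: uint)

fn main() {
    // previously free functions in std::local_data; `replace` both sets the
    // new value and returns the old one (combining the old `pop` and `set`)
    counter.replace(Some(1));

    // get() returns an Option<SmartPointer>; the borrow is released when the
    // pointer goes out of scope, so no closure is needed
    match counter.get() {
        Some(c) => println!("counter = {}", *c),
        None => println!("counter is not set"),
    }
}
```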
[breaking-change]
Previously, the parser would not allow you to simultaneously implement a
function with a different abi as well as being unsafe at the same time. This
extends the parser to allow functions of the form:
unsafe extern fn foo() {
    // ...
}
The closure type grammar was also changed to reflect this reversal, types
previously written as "extern unsafe fn()" must now be written as
"unsafe extern fn()". The parser currently has a hack which allows the old
style, but this will go away once a snapshot has landed.
Closes#10025
[breaking-change]
This pull request contains preparations for adding LLDB autotests:
+ the debuginfo tests are split into debuginfo-gdb and debuginfo-lldb
+ the `compiletest` tool is updated to support the debuginfo-lldb mode
+ tests.mk is modified to provide debuginfo-gdb and debuginfo-lldb make targets
+ GDB test cases are moved from `src/test/debug-info` to `src/test/debuginfo-gdb`
+ configure will now look for LLDB and set the appropriate CFG variables
+ the `lldb_batchmode.py` script is added to `src/etc`. It emulates GDB's batch mode
The LLDB autotests themselves are not part of this PR. Those will probably require some manual work on the test bots to make them work for the first time. Better to get these unproblematic preliminaries out of the way in a separate step.
This is the second step in implementing #13851. This PR cannot currently land until a snapshot exists with #13892, but I imagine that this review will take longer.
This PR refactors a large amount of functionality outside of the standard library into a new library, libcore. This new library has 0 dependencies (in theory). In practice, this library currently depends on these symbols being available:
* `rust_begin_unwind` and `rust_fail_bounds_check` - These are the two entry points of failure in libcore. The symbols are provided by libstd currently. In the future (see the bullets on #13851) this will be officially supported with nice error messages. Additionally, there will only be one failure entry point once `std::fmt` migrates to libcore.
* `memcpy` - This is often generated by LLVM. This is also quite trivial to implement for any platform, so I'm not too worried about this.
* `memcmp` - This is required for comparing strings. This function is quite common *everywhere*, so I don't feel too bad about relying on a consumer of libcore to define it.
* `malloc` and `free` - This is quite unfortunate, and is a temporary stopgap until we deal with the `~` situation. More details can be found in the module `core::should_not_exist`
* `fmod` and `fmodf` - These exist because the `Rem` trait is defined in libcore, so the `Rem` implementation for floats must also be defined in libcore. I imagine that any platform using floating-point modulus will have these symbols anyway, and otherwise they will be optimized out.
* `fdim` and `fdimf` - Like `fmod`, these are from the `Signed` trait being defined in libcore. I don't expect this to be much of a problem
These dependencies all "Just Work" for now because libcore only exists as an rlib, not as a dylib.
The commits themselves are organized to show that the overall diff of this extraction is not all that large. Most modules were able to be moved with very few modifications. The primary module left out of this iteration is `std::fmt`. I plan on migrating the `fmt` module to libcore, but I chose to not do so at this time because it had implications on the `Writer` trait that I wanted to deal with in isolation. There are a few breaking changes in these commits, but they are fairly minor, and are all labeled with `[breaking-change]`.
The nastiest parts of this movement come up with `~[T]` and `~str` being language-defined types today. I believe that much of this nastiness will get better over time as we migrate towards `Vec<T>` and `Str` (or whatever the types will be named). There will likely always be some extension traits, but the situation won't be as bad as it is today.
Known deficiencies:
* rustdoc will get worse in terms of readability. This is the next issue I will tackle as part of #13851. If others think that the rustdoc change should happen first, I can also table this to fix rustdoc first.
* The compiler reveals that all these types are reexports via error messages like `core::option::Option`. This is filed as #13065, and I believe that issue would have a higher priority now. I do not currently plan on fixing that as part of #13851. If others believe that this issue should be fixed, I can also place it on the roadmap for #13851.
I recommend viewing these changes on a commit-by-commit basis. The overall change is likely too overwhelming to take in.
Compile-fail tests for syntax extensions belong in this suite which has correct
dependencies on all artifacts rather than just the target artifacts.
Closes#13818
This change makes internal compiler errors in the compile-fail tests count as failures.
I believe this is the correct behaviour: those tests are intended to assert that the compiler doesn't proceed, not that it explodes.
So far, it fails on 4 tests in my environment, my testcase for #13943 which is what caused me to tackle this, and 3 others:
```
failures:
[compile-fail] compile-fail/incompatible-tuple.rs # This one is mine and not on master
[compile-fail] compile-fail/inherit-struct8.rs
[compile-fail] compile-fail/issue-9725.rs
[compile-fail] compile-fail/unsupported-cast.rs
```
for `~str`/`~[]`.
Note that `~self` still remains, since I forgot to add support for
`Box<self>` before the snapshot.
How to update your code:
* Instead of `~EXPR`, you should write `box EXPR`.
* Instead of `~TYPE`, you should write `Box<TYPE>`.
* Instead of `~PATTERN`, you should write `box PATTERN`.
[breaking-change]
Previously, the parser would not allow you to simultaneously implement a
function with a different abi as well as being unsafe at the same time. This
extends the parser to allow functions of the form:
unsafe extern fn foo() {
    // ...
}
The closure type grammar was also changed to reflect this reversal, types
previously written as "extern unsafe fn()" must now be written as
"unsafe extern fn()". The parser currently has a hack which allows the old
style, but this will go away once a snapshot has landed.
Closes#10025
[breaking-change]
Currently, rustc requires that a linkage be a product of 100% rlibs or 100%
dylibs. This is to satisfy the requirement that each object appear at most once
in the final output products. This is a bit limiting, and the upcoming libcore
library cannot exist as a dylib, so these rules must change.
The goal of this commit is to enable *some* use cases for mixing rlibs and
dylibs, primarily libcore's use case. It is not targeted at allowing an
exhaustive number of linkage flavors.
There is a new dependency_format module in rustc which calculates what format
each upstream library should be linked as in each output type of the current
unit of compilation. The module itself contains many gory details about what's
going on here.
cc #10729
The code in resolve erroneously assumed that private enums weren't visited, so
the logic was adjusted to check to see if the enum definition itself was public.
Closes#11680
By carefully distinguishing falling back to the default arm from moving
on to the next pattern, this patch adjusts the codegen logic for range
and guarded arms of pattern matching expression. It is a more
appropriate way of fixing #12582 and #13027 without causing regressions
such as #13867.
Closes#13867
The logging macros now create a LogRecord, and pass that to the Logger. This will allow custom loggers to change the formatting, and possibly filter on more properties of the log record.
DefaultLogger's formatting was taken from Python's default formatting:
`LEVEL:from: message`
Also included: fmt::Arguments now implement Show, so they can be used to
extend format strings.
@alexcrichton r?
The logging macros now create a LogRecord, and pass that to the
Logger, instead of passing a `level` and `args`. The new signature is:
trait Logger {
    fn log(&mut self, record: &LogRecord);
}
The LogRecord includes additional values that may be useful to custom
loggers, and also allows for further expansion if new values are found
to be useful.
DefaultLogger's formatting was taken from Python's default formatting:
`LEVEL:from: message`
Also included: fmt::Arguments now implement Show, so they can be used to
extend format strings.
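For illustration, a hedged sketch of a custom logger against this trait; the `LogRecord` field names used below (`level`, `module_path`, `args`) are assumptions:

```rust
extern crate log;

use log::{Logger, LogRecord};

// A custom logger mimicking the default `LEVEL:from: message` layout.
struct PlainLogger;

impl Logger for PlainLogger {
    fn log(&mut self, record: &LogRecord) {
        // `args` is a fmt::Arguments, which now implements Show, so it can
        // be formatted directly here.
        println!("{}:{}: {}", record.level, record.module_path, record.args);
    }
}

fn main() {
    // install PlainLogger through liblog's logger-registration function
    // (not shown here) before emitting log messages
}
```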
[breaking-change]
This patch fixes issue #13186.
When generating a constant expression for an enum, the alignment of the
expression may not be equal to the alignment of the type. In that case the
space after the last struct field must be padded so that the size of the
value matches the size of the struct. This commit adds that padding.
See the detailed explanation in src/test/run-pass/trans-tag-static-padding.rs
By carefully distinguishing falling back to the default arm from moving
on to the next pattern, this patch adjusts the codegen logic for range
and guarded arms of pattern matching expression. It is a more
appropriate way of fixing #12582 and #13027 without causing regressions
such as #13867.
Closes#13867
Some cases were not correctly handled by this lint, for instance `let a = 42u8; a < 0` and `let a = 42u8; a > 255`.
It led to the discovery of two useless comparisons, which I removed.
This has long since not been too relevant since the introduction of many crate
type outputs. This commit removes the flag entirely, adjusting all logic to do
the most reasonable thing when building both a library and an executable.
Closes#13337
- using libgreen to optimize CPU usage
- fewer tasks to limit wasted resources
Here, on a one-core, two-thread CPU, the new version is ~1.2x faster. It may
do better with more cores.
This ensures that private functions exported through static initializers will
actually end up being public in the object file (so other objects can continue
to reference the function).
Closes#13620
- using libgreen to optimize CPU usage
- fewer tasks to limit wasted resources
Here, on a one-core, two-thread CPU, the new version is ~1.2x faster. It may
do better with more cores.
Closes#7575.
I don't think the change from a contains lookup to an iteration of the HashSet in the resolver should be much of a burden as the set of methods with the same name should be relatively small.
This is a first patch towards an opt-in built-in trait world. This patch removes the restriction on built-in traits and allows such traits to be derived.
[RFC#3]
cc #13231
@nikomatsakis r?
This ensures that private functions exported through static initializers will
actually end up being public in the object file (so other objects can continue
to reference the function).
Closes#13620
This has long since not been too relevant since the introduction of many crate
type outputs. This commit removes the flag entirely, adjusting all logic to do
the most reasonable thing when building both a library and an executable.
Closes#13337
Currently, rustc requires that a linkage be a product of 100% rlibs or 100%
dylibs. This is to satisfy the requirement that each object appear at most once
in the final output products. This is a bit limiting, and the upcoming libcore
library cannot exist as a dylib, so these rules must change.
The goal of this commit is to enable *some* use cases for mixing rlibs and
dylibs, primarily libcore's use case. It is not targeted at allowing an
exhaustive number of linkage flavors.
There is a new dependency_format module in rustc which calculates what format
each upstream library should be linked as in each output type of the current
unit of compilation. The module itself contains many gory details about what's
going on here.
cc #10729
See the commits for details.
This shouldn't change the generated code at all (except for switching to `LitBinary` from an explicit ExprVec of individual ExprLit bytes for `prefix_bytes`).
Pre-step towards issue #12624 and others: Introduce ExprUseVisitor, remove the
moves computation. ExprUseVisitor is a visitor that walks the AST for a
function and calls a delegate to inform it where borrows, copies, and moves
occur.
In this patch, I rewrite the gather_loans visitor to use ExprUseVisitor, but in
future patches, I think we could rewrite regionck, check_loans, and possibly
other passes to use it as well. This would refactor the repeated code between
those places that tries to determine where copies/moves/etc occur.
r? @alexcrichton
Hi rust enthusiasts,
With this patch I propose to add a "streaming" API to the existing json parser in libserialize.
By "streaming" I mean a parser that let you act on JsonEvents that are generated as while parsing happens, as opposed to parsing the entire source, generating a big data structure and working with this data structure. I think both approaches have their pros and cons so this pull request adds the streaming API, preserving the existing one.
The streaming API is simple: It consist into an Iterator<JsonEvent> that consumes an Iterator<char>. JsonEvent is an enum with values such as NumberValue(f64), BeginList, EndList, BeginObject, etc.
The user would ideally use the API as follows:
```
for evt in StreamingParser::new(src) {
    match evt {
        BeginList => {
            // ...
        }
        // ...
    }
}
```
The iterator provides a stack() method returning a slice of StackNodes which represent "where we currently are" in the logical structure of the json stream (for instance at "foo.bar[3].x" you get [ Key("foo"), Key("bar"), Index(3), Key("x") ].)
I wrote "ideally" above because the current way rust expands for loops, you can't call the stack() method because the iterator is already borrowed. So for know you need to manually advance the iterator in the loop. I hope this is something we can cope with, until for loops are better integrated with the compiler.
Streaming parsers are useful when you want to read from a json stream, generate a custom data structure and you know how the json is going to be structured. For example, imagine you have to parse a 3D mesh file represented in the json format. In this case you probably expect to have large arrays of vertices and using the generic parser will be very inefficient because it will create a big list of all these vertices, which you will copy into a contiguous array afterwards (so you end up doing a lot of small allocations, parsing the json once and parsing the data structure afterwards). With a streaming parser, you can add the vertices to a contiguous array as they come in without paying the cost of creating the intermediate Json data structure. You have much fewer allocations since you write directly in the final data structure and you can be smart in how you will pre-allocate it.
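For illustration, a hedged sketch of that style of use, written against the names proposed above (StreamingParser, JsonEvent's NumberValue); paths and signatures are assumptions:

```rust
extern crate serialize;

use serialize::json::{StreamingParser, NumberValue};

// Pull the numbers straight out of the event stream without building an
// intermediate Json value.
fn collect_numbers<T: Iterator<char>>(src: T) -> Vec<f64> {
    let mut parser = StreamingParser::new(src);
    let mut out = Vec::new();
    // advance the iterator by hand (see the note about `for` loops above)
    loop {
        match parser.next() {
            Some(NumberValue(n)) => out.push(n),
            Some(_) => {}
            None => break,
        }
    }
    out
}

fn main() {
    let src = "{\"vertices\": [1.0, 2.5, 3.25]}";
    println!("{}", collect_numbers(src.chars()));
}
```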
I added this directly into serialize::json rather than in its own library because it turns out I can reuse most of the existing code whereas maintaining a separate library (which I did originally) forces me to duplicate this code.
I wrote this trying to minimize the size of the patch so there may be places where the code could be nicer at the expenses of more changes (let me know what you prefer).
This is my first (potential) contribution to rust, so please let me know if I am doing something wrong (maybe I should have first introduced this proposition in the mailing list, or opened a github issue, etc.?). I work a few meters away from @pknfelix so I am not too hard to find :)
When a syntax extension is loaded by the compiler, the dylib that is opened may
have other dylibs that it depends on. The dynamic linker must be able to find
these libraries on the system or else the library will fail to load.
Currently, unix gets by with the use of rpaths. This relies on the dylib not
moving around too drastically relative to its dependencies. For windows,
however, there is no rpath available, and in theory unix should work without
rpaths as well.
This modifies the compiler to add all -L search directories to the dynamic
linker's set of load paths. This is currently managed through environment
variables for each platform.
Closes#13848
The compiler has previously been producing binaries on the order of 1.8MB for
hello world programs "fn main() {}". This is largely a result of the compilation
model used by compiling entire libraries into a single object file and because
static linking is favored by default.
When linking, linkers will pull in the entire contents of an object file if any
symbol from the object file is used. This means that if any symbol from a rust
library is used, the entire library is pulled in unconditionally, regardless of
whether the library is used or not.
Traditional C/C++ projects do not normally encounter these large executable
problems because their archives (rust's rlibs) are composed of many objects.
Because of this, linkers can eliminate entire objects from being in the final
executable. With rustc, however, the linker does not have the opportunity to
leave out entire object files.
In order to get similar benefits from dead code stripping at link time, this
commit enables the -ffunction-sections and -fdata-sections flags in LLVM, as
well as passing --gc-sections to the linker *by default*. This means that each
function and each global will be placed into its own section, allowing the
linker to GC all unused functions and data symbols.
By enabling these flags, rust is able to generate much smaller binaries by default.
On linux, a hello world binary went from 1.8MB to 597K (a 67% reduction in
size). The output size of dynamic libraries remained constant, but the output
size of rlibs increased, as seen below:
libarena - 2.27% bigger
libcollections - 0.64% bigger
libflate - 0.85% bigger
libfourcc - 14.67% bigger
libgetopts - 4.52% bigger
libglob - 2.74% bigger
libgreen - 9.68% bigger
libhexfloat - 13.68% bigger
liblibc - 10.79% bigger
liblog - 10.95% bigger
libnative - 8.34% bigger
libnum - 2.31% bigger
librand - 1.71% bigger
libregex - 6.43% bigger
librustc - 4.21% bigger
librustdoc - 8.98% bigger
librustuv - 4.11% bigger
libsemver - 2.68% bigger
libserialize - 1.92% bigger
libstd - 3.59% bigger
libsync - 3.96% bigger
libsyntax - 4.96% bigger
libterm - 13.96% bigger
libtest - 6.03% bigger
libtime - 2.86% bigger
liburl - 6.59% bigger
libuuid - 4.70% bigger
libworkcache - 8.44% bigger
This increase in size is a result of encoding many more section names into each
object file (rlib). These increases are moderate enough that this change seems
worthwhile to me, due to the drastic improvements seen in the final artifacts.
The overall increase of the stage2 target folder (not the size of an install)
went from 337MB to 348MB (3% increase).
Additionally, linking is generally slower when executed with all these new
sections plus the --gc-sections flag. The stage0 compiler takes 1.4s to link the
`rustc` binary, where the stage1 compiler takes 1.9s to link the binary. Three
megabytes are shaved off the binary. I found this increase in link time to be
acceptable relative to the benefits of code size gained.
This commit only enables --gc-sections for *executables*, not dynamic libraries.
LLVM does all the heavy lifting when producing an object file for a dynamic
library, so there is little else for the linker to do (remember that we only
have one object file).
I conducted similar experiments by putting a *module's* functions and data
symbols into its own section (granularity moved to a module level instead of a
function/static level). The size benefits of a hello world were seen to be on
the order of 400K rather than 1.2MB. It seemed that enough benefit was gained
using -ffunction-sections that this route was less desirable, despite the lesser
increases in binary rlib size.
The compiler has previously been producing binaries on the order of 1.8MB for
hello world programs "fn main() {}". This is largely a result of the compilation
model used by compiling entire libraries into a single object file and because
static linking is favored by default.
When linking, linkers will pull in the entire contents of an object file if any
symbol from the object file is used. This means that if any symbol from a rust
library is used, the entire library is pulled in unconditionally, regardless of
whether the library is used or not.
Traditional C/C++ projects do not normally encounter these large executable
problems because their archives (rust's rlibs) are composed of many objects.
Because of this, linkers can eliminate entire objects from being in the final
executable. With rustc, however, the linker does not have the opportunity to
leave out entire object files.
In order to get similar benefits from dead code stripping at link time, this
commit enables the -ffunction-sections and -fdata-sections flags in LLVM, as
well as passing --gc-sections to the linker *by default*. This means that each
function and each global will be placed into its own section, allowing the
linker to GC all unused functions and data symbols.
By enabling these flags, rust is able to generate much smaller binaries by default.
On linux, a hello world binary went from 1.8MB to 597K (a 67% reduction in
size). The output size of dynamic libraries remained constant, but the output
size of rlibs increased, as seen below:
libarena - 2.27% bigger ( 292872 => 299508)
libcollections - 0.64% bigger ( 6765884 => 6809076)
libflate - 0.83% bigger ( 186516 => 188060)
libfourcc - 14.71% bigger ( 307290 => 352498)
libgetopts - 4.42% bigger ( 761468 => 795102)
libglob - 2.73% bigger ( 899932 => 924542)
libgreen - 9.63% bigger ( 1281718 => 1405124)
libhexfloat - 13.88% bigger ( 333738 => 380060)
liblibc - 10.79% bigger ( 551280 => 610736)
liblog - 10.93% bigger ( 218208 => 242060)
libnative - 8.26% bigger ( 1362096 => 1474658)
libnum - 2.34% bigger ( 2583400 => 2643916)
librand - 1.72% bigger ( 1608684 => 1636394)
libregex - 6.50% bigger ( 1747768 => 1861398)
librustc - 4.21% bigger (151820192 => 158218924)
librustdoc - 8.96% bigger ( 13142604 => 14320544)
librustuv - 4.13% bigger ( 4366896 => 4547304)
libsemver - 2.66% bigger ( 396166 => 406686)
libserialize - 1.91% bigger ( 6878396 => 7009822)
libstd - 3.59% bigger ( 39485286 => 40902218)
libsync - 3.95% bigger ( 1386390 => 1441204)
libsyntax - 4.96% bigger ( 35757202 => 37530798)
libterm - 13.99% bigger ( 924580 => 1053902)
libtest - 6.04% bigger ( 2455720 => 2604092)
libtime - 2.84% bigger ( 1075708 => 1106242)
liburl - 6.53% bigger ( 590458 => 629004)
libuuid - 4.63% bigger ( 326350 => 341466)
libworkcache - 8.45% bigger ( 1230702 => 1334750)
This increase in size is a result of encoding many more section names into each
object file (rlib). These increases are moderate enough that this change seems
worthwhile to me, due to the drastic improvements seen in the final artifacts.
The overall increase of the stage2 target folder (not the size of an install)
went from 337MB to 348MB (3% increase).
Additionally, linking is generally slower when executed with all these new
sections plus the --gc-sections flag. The stage0 compiler takes 1.4s to link the
`rustc` binary, where the stage1 compiler takes 1.9s to link the binary. Three
megabytes are shaved off the binary. I found this increase in link time to be
acceptable relative to the benefits of code size gained.
This commit only enables --gc-sections for *executables*, not dynamic libraries.
LLVM does all the heavy lifting when producing an object file for a dynamic
library, so there is little else for the linker to do (remember that we only
have one object file).
I conducted similar experiments by putting a *module's* functions and data
symbols into its own section (granularity moved to a module level instead of a
function/static level). The size benefits of a hello world were seen to be on
the order of 400K rather than 1.2MB. It seemed that enough benefit was gained
using -ffunction-sections that this route was less desirable, despite the lesser
increases in binary rlib size.
Compile-fail tests for syntax extensions belong in this suite which has correct
dependencies on all artifacts rather than just the target artifacts.
Closes#13818
Similar to my recent changes to ~[T]/&[T], these changes remove the vstore abstraction and represent str types as ~(str) and &(str). The Option<uint> in ty_str is the length of the string, None if the string is dynamically sized.
This addresses the ICE from #13763, but it does not allow the test to compile,
due to #13768. An alternate test was checked in in the meantime.
Closes#13763
It didn't work because it tried to call itself, but symbols are not
exported by default in executables.
Note that `fun5` is not internal anymore since it is in a library.
Second commit removes/updates some old tests.
It didn't work because it tried to call itself, but symbols are not
exported by default in executables.
Note that `fun5` is not internal anymore since it is in a library.
Implements [RFC 7](https://github.com/rust-lang/rfcs/blob/master/active/0007-regexps.md) and will hopefully resolve #3591. The crate is marked as experimental. It includes a syntax extension for compiling regexps to native Rust code.
Embeds and passes the `basic`, `nullsubexpr` and `repetition` tests from [Glenn Fowler's (slightly modified by Russ Cox for leftmost-first semantics) testregex test suite](http://www2.research.att.com/~astopen/testregex/testregex.html). I've also hand written a plethora of other tests that exercise Unicode support, the parser, public API, etc. Also includes a `regex-dna` benchmark for the shootout.
I know the addition looks huge at first, but consider these things:
1. More than half the number of lines is dedicated to Unicode character classes.
2. Of the ~4,500 lines remaining, 1,225 of them are comments.
3. Another ~800 are tests.
4. That leaves 2500 lines for the meat. The parser is ~850 of them. The public API, compiler, dynamic VM and code generator (for `regexp!`) make up the rest.
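As a usage illustration only (a hedged sketch; the crate name `regexp`, `Regexp::new` and `is_match` are assumptions based on the description above):

```rust
extern crate regexp;

use regexp::Regexp;

fn main() {
    // a dynamically compiled pattern; the regexp! syntax extension would
    // instead compile the pattern to native Rust code at build time
    let re = Regexp::new(r"^\d{4}-\d{2}-\d{2}$").unwrap();
    assert!(re.is_match("2014-05-05"));
}
```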
moves computation. ExprUseVisitor is a visitor that walks the AST for a
function and calls a delegate to inform it where borrows, copies, and moves
occur.
In this patch, I rewrite the gather_loans visitor to use ExprUseVisitor, but in
future patches, I think we could rewrite regionck, check_loans, and possibly
other passes to use it as well. This would refactor the repeated code between
those places that tries to determine where copies/moves/etc occur.
Specifically, the span of a method parameter cardinality mismatch or missing
method error message now covers exactly the method itself; previously it was
the whole expression.
Closes #9390. Closes #13684. Closes #13709.
This patch removes the special auto-rooting for `@` from the borrow checker. With `@` moving into a library, it doesn't make sense to keep this code around anymore. It also simplifies `trans` by removing root checking from there.
@nikomatsakis
Closes: #11586
This is really only useful for #[cfg()]. For example:
```rust
enum Foo {
    Bar,
    Baz,
    #[cfg(blob)]
    Blob
}

fn match_foos(f: &Foo) {
    match *f {
        Bar => {}
        Baz => {}
        #[cfg(blob)]
        Blob => {}
    }
}
```
This is a kind of weird place to allow attributes, so it should probably
be discussed before merging.
The constructor for `TaskBuilder` is being changed to an associated
function called `new` for consistency with the rest of the standard
library.
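For illustration, a hedged sketch of the new spelling; the `named` and `spawn` calls are assumptions about the rest of the builder API:

```rust
use std::task::TaskBuilder;

fn main() {
    // previously obtained from a free function in std::task; `new` is now an
    // associated function, like constructors elsewhere in the standard library
    TaskBuilder::new().named("worker").spawn(proc() {
        println!("hello from the worker task");
    });
}
```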
Closes#13666
[breaking-change]
Specifically, the span of a method parameter cardinality mismatch or missing
method error message now covers exactly the method itself; previously it was
the whole expression.
Closes #9390. Closes #13684. Closes #13709.
This allows the use of syntax extensions when cross-compiling (fixing #12102). It does this by encoding the target triple in the crate metadata and checking it when searching for files. Currently the crate triple must match the host triple when there is a macro_registrar_fn, it must match the target triple when linking, and can match either when only macro_rules! macros are used.
Due to carelessness, this is pretty much a duplicate of https://github.com/mozilla/rust/pull/13450.
This adds the target triple to the crate metadata.
When searching for a crate the phase (link, syntax) is taken into account.
During link phase only crates matching the target triple are considered.
During syntax phase, either the target or host triple will be accepted, unless
the crate defines a macro_registrar, in which case only the host triple will
match.
This alters the borrow checker's requirements on invoking closures from
requiring an immutable borrow to requiring a unique immutable borrow. This means
that it is illegal to invoke a closure through a `&` pointer because there is no
guarantee that is not aliased. This does not mean that a closure is required to
be in a mutable location, but rather a location which can be proven to be
unique (often through a mutable pointer).
For example, the following code is unsound and is no longer allowed:
type Fn<'a> = ||:'a;
fn call(f: |Fn|) {
    f(|| {
        f(|| {})
    });
}
fn main() {
    call(|a| {
        a();
    });
}
There is no replacement for this pattern. For all closures which are stored in
structures, it was previously allowed to invoke the closure through `&self` but
it now requires invocation through `&mut self`.
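For illustration, a hedged sketch of the required adjustment, using a hypothetical `Window` type:

```rust
struct Window<'a> {
    on_click: ||: 'a,
}

impl<'a> Window<'a> {
    // previously `&self` was enough to call the stored closure; the closure
    // must now be reached through something the compiler can prove unique
    fn click(&mut self) {
        (self.on_click)();
    }
}

fn main() {
    let mut w = Window { on_click: || println!("clicked") };
    w.click();
}
```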
The standard library has a good number of violations of this new rule, but the
fixes will be separated into multiple breaking change commits.
Closes#12224
The filestem of the desired output isn't necessarily a valid crate id, and
calling unwrap() will trigger an ICE in rustc. This tries a little harder to
infer a "valid crate id" from a crate, with an eventual fallback to a generic
crate id if all else fails.
Closes#11107
This alters the borrow checker's requirements on invoking closures from
requiring an immutable borrow to requiring a unique immutable borrow. This means
that it is illegal to invoke a closure through a `&` pointer because there is no
guarantee that it is not aliased. This does not mean that a closure is required to
be in a mutable location, but rather a location which can be proven to be
unique (often through a mutable pointer).
For example, the following code is unsound and is no longer allowed:
type Fn<'a> = ||:'a;
fn call(f: |Fn|) {
    f(|| {
        f(|| {})
    });
}
fn main() {
    call(|a| {
        a();
    });
}
There is no replacement for this pattern. For all closures which are stored in
structures, it was previously allowed to invoke the closure through `&self` but
it now requires invocation through `&mut self`.
The standard library has a good number of violations of this new rule, but the
fixes will be separated into multiple breaking change commits.
Closes#12224
[breaking-change]
The BSD builders are failing with a different error that is not a timeout error
(Connection reset by peer), so this test isn't really all that useful on
freebsd. Due to a lack of a better idea of how to test a connect timeout, this
test is going to just be ignored for now.
Now with proper checking of enums, and unsized fields are allowed as the last field in a struct or variant. This PR only checks passing of unsized types and distinguishing them from sized ones. To be safe we also need to control storage.
Closes issues #12969 and #13121, supersedes #13375 (all the discussion there is valid here too).
This currently requires linking against a library like libquadmath (or
libgcc), because compiler-rt barely has any support for this and most
hardware does not yet have 128-bit precision floating point. For this
reason, it's currently hidden behind a feature gate.
When compiler-rt is updated to trunk, some tests can be added for
constant evaluation since there will be support for the comparison
operators.
Closes#13381
When reporting "consider removing this semicolon" hint message, the
offending semicolon may come from the macro call site instead of the macro
itself. Using the more appropriate span makes the hint more helpful.
Closes#13428.
This gives a better NOTE error message when a privacy error is encountered with
a static method. Previously no note was emitted (due to lack of support), but
now a note is emitted indicating that the struct/enum itself is private.
Closes#13641
This adds a `TcpStream::connect_timeout` function in order to assist opening
connections with a timeout (cc #13523). There isn't really much design space for
this specific operation (unlike timing out normal blocking reads/writes), so I
am fairly confident that this is the correct interface for this function.
The function is marked #[experimental] because it takes a u64 timeout argument,
and the u64 type is likely to change in the future.
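For illustration, a hedged sketch of a call site; the exact argument types (a SocketAddr plus a millisecond count) are assumptions:

```rust
use std::io::net::tcp::TcpStream;
use std::io::net::ip::{SocketAddr, Ipv4Addr};

fn main() {
    let addr = SocketAddr { ip: Ipv4Addr(127, 0, 0, 1), port: 8080 };
    // the timeout is in milliseconds (a u64, hence the #[experimental] marker)
    match TcpStream::connect_timeout(addr, 3000) {
        Ok(_stream) => println!("connected"),
        Err(e) => println!("connect failed or timed out: {}", e),
    }
}
```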
When reporting "consider removing this semicolon" hint message, the
offending semicolon may come from the macro call site instead of the macro
itself. Using the more appropriate span makes the hint more helpful.
Closes#13428.
Syntax-only crates are no longer registered with the cstore, so there's no need
to allocate crate numbers to them. This ends up leaving gaps in the crate
numbering scheme which is not expected in the rest of the compiler.
Closes#13560
Fixes #12856.
I wanted to put this up first because I wanted to get feedback about the second commit in the series, commit 8599236. It's the more invasive part of the patch and is largely just belt-and-suspenders assertion checking; in the commit message I mentioned at least one other approach we could take here. Or we could drop the belt-and-suspenders and just rely on the guard added in the first patch, commit 8d6a005 (which is really quite trivial on its own).
So any feedback on what would be better is appreciated.
r? @nikomatsakis
When instantiating trait default methods for a certain implementation,
`typeck` correctly combined type parameters from the trait bound with those
from the method bound, but didn't do so for lifetime parameters. This applies
the same logic to lifetime parameters.
Closes#13204
This commit changes the way move errors are reported when some value is
captured by a PatIdent. First, we collect all of the "cannot move out
of" errors before reporting them, and those errors with the same "move
source" are reported together. If the move is caused by a PatIdent (that
binds by value), we add a note indicating where it is and suggest the
user to put `ref` if they don't want the value to move. This makes the
"cannot move out of" error in match expression nicer (though the extra
note may not feel that helpful in other places :P). For example, with
the following code snippet,
```rust
enum Foo {
    Foo1(~u32, ~u32),
    Foo2(~u32),
    Foo3,
}

fn main() {
    let f = &Foo1(~1u32, ~2u32);
    match *f {
        Foo1(num1, num2) => (),
        Foo2(num) => (),
        Foo3 => ()
    }
}
```
Errors before the change:
```rust
test.rs:10:9: 10:25 error: cannot move out of dereference of `&`-pointer
test.rs:10 Foo1(num1, num2) => (),
^~~~~~~~~~~~~~~~
test.rs:10:9: 10:25 error: cannot move out of dereference of `&`-pointer
test.rs:10 Foo1(num1, num2) => (),
^~~~~~~~~~~~~~~~
test.rs:11:9: 11:18 error: cannot move out of dereference of `&`-pointer
test.rs:11 Foo2(num) => (),
^~~~~~~~~
```
After:
```rust
test.rs:9:11: 9:13 error: cannot move out of dereference of `&`-pointer
test.rs:9 match *f {
^~
test.rs:10:14: 10:18 note: attempting to move value to here (to prevent the move, use `ref num1` or `ref mut num1` to capture value by reference)
test.rs:10 Foo1(num1, num2) => (),
^~~~
test.rs:10:20: 10:24 note: and here (use `ref num2` or `ref mut num2`)
test.rs:10 Foo1(num1, num2) => (),
^~~~
test.rs:11:14: 11:17 note: and here (use `ref num` or `ref mut num`)
test.rs:11 Foo2(num) => (),
^~~
```
Closes #8064
Syntax-only crates are no longer registered with the cstore, so there's no need
to allocate crate numbers to them. This ends up leaving gaps in the crate
numbering scheme which is not expected in the rest of the compiler.
Closes#13560
When instantiating trait default methods for a certain implementation,
`typeck` correctly combined type parameters from the trait bound with those
from the method bound, but didn't do so for lifetime parameters. This applies
the same logic to lifetime parameters.
Closes#13204
This removes the `priv` keyword from the language and removes private enum
variants as a result. The remaining use cases of private enum variants were all
updated to be a struct with one private field that is a private enum.
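For illustration, a hedged sketch of the replacement pattern with made-up names:

```rust
#![allow(dead_code)]

// Before this change an enum could mix public and private variants, e.g.
// `pub enum Foo { Bar, priv Baz }`; that is no longer accepted. Instead, a
// public struct with one private field wraps a now fully-private enum.
pub struct Foo {
    inner: FooInner, // private field
}

enum FooInner {
    Bar,
    Baz,
}

fn main() {}
```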
RFC: 0006-remove-priv
Closes#13535
There's now one unified way to return things from a macro, instead of
being able to choose the `AnyMacro` trait or the `MRItem`/`MRExpr`
variants of the `MacResult` enum. This does simplify the logic handling
the expansions, but the biggest value of this is that it makes macros in (for
example) type position easier to implement, as there's this single thing
to modify.
By my measurements (using `-Z time-passes` on libstd and librustc etc.),
this appears to have little-to-no impact on expansion speed. There are
presumably larger costs than the small number of extra allocations and
virtual calls this adds (notably, all `macro_rules!`-defined macros have
not changed in behaviour, since they had to use the `AnyMacro` trait
anyway).
---
Summary of changes for dynamic syntax extension maintainers:
- `MacResult` is now a trait, and is returned as `~MacResult`
- `MRExpr` & `MRItem` are now `MacExpr::new` and `MacItem::new` respectively (which return `~MacResult`s)
- `MacResult::dummy_...` is `DummyResult::any` or `DummyResult::expr`
There's now one unified way to return things from a macro, instead of
being able to choose the `AnyMacro` trait or the `MRItem`/`MRExpr`
variants of the `MacResult` enum. This does simplify the logic handling
the expansions, but the biggest value of this is that it makes macros in (for
example) type position easier to implement, as there's this single thing
to modify.
By my measurements (using `-Z time-passes` on libstd and librustc etc.),
this appears to have little-to-no impact on expansion speed. There are
presumably larger costs than the small number of extra allocations and
virtual calls this adds (notably, all `macro_rules!`-defined macros have
not changed in behaviour, since they had to use the `AnyMacro` trait
anyway).
Closes#13546 (workcache: Don't assume gcc exists on all platforms)
Closes#13545 (std: Remove pub use globs)
Closes#13530 (test: Un-ignore smallest-hello-world.rs)
Closes#13529 (std: Un-ignore some float tests on windows)
Closes#13528 (green: Add a helper macro for booting libgreen)
Closes#13526 (Remove RUST_LOG="::help" from the docs)
Closes#13524 (dist: Make Windows installer uninstall first. Closes#9563)
Closes#13521 (Change AUTHORS section in the man pages)
Closes#13519 (Update GitHub's Rust projects page.)
Closes#13518 (mk: Change windows to install from stage2)
Closes#13516 (liburl doc: insert missing hyphen)
Closes#13514 (rustdoc: Better sorting criteria for searching.)
Closes#13512 (native: Fix a race in select())
Closes#13506 (Use the unsigned integer types for bitwise intrinsics.)
Closes#13502 (Add a default impl for Set::is_superset)
Previously, if statements of the form "Foo;" or "let _ = Foo;" were encountered
where Foo had a destructor, the destructors were not run. This changes
the relevant locations in trans to check for ty::type_needs_drop and invokes
trans_to_lvalue instead of trans_into.
Closes #4734. Closes #6892.
Exposing ctpop, ctlz, cttz and bswap as taking signed i8/i16/... is just
exposing the internal LLVM names pointlessly (LLVM doesn't have "signed
integers" or "unsigned integers", it just has sized integer types
with (un)signed *operations*).
These operations are semantically working with raw bytes, which the
unsigned types model better.
During selection, libnative would erroneously re-acquire ownership of a task
when a separate thread still had ownership of the task. The loop in select()
was rewritten to acknowledge this race and instead block waiting to re-acquire
ownership rather than plowing through.
Closes#13494
Fixes#13507.
I haven't familiarized myself with this part of the rust compiler, so hopefully there are no mistakes (despite the simplicity of the commit). It is also 5am.
This includes a change to the way lifetime names are generated. Say we
figure that `[#0, 'a, 'b]` have to be the same lifetimes, then instead
of just generating a new lifetime `'c` like before to replace them, we
would reuse `'a`. This is done so that when the lifetime name comes
from an impl, we don't give something that's completely off, and we
don't have to do much work to figure out where the name came from. For
example, for the following code snippet:
```rust
struct Baz<'x> {
    bar: &'x int
}

impl<'x> Baz<'x> {
    fn baz1(&self) -> &int {
        self.bar
    }
}
```
`[#1, 'x]` (where `#1` is BrAnon(1) and refers to lifetime of `&int`)
have to be marked the same lifetime. With the old method, we would
generate a new lifetime `'a` and suggest `fn baz1(&self) -> &'a int`
or `fn baz1<'a>(&self) -> &'a int`, both of which are wrong.
Before, the `--crate-file-name` flag only checked crate attributes for
possible crate types. Now, if any type is specified by one or more
`--crate-type` flags, only the filenames for those types will be
emitted, and any types specified by crate attributes will be ignored.
Before, normal compilation and the --crate-file-name flag would
generate output based on both #![crate_type] attributes and
--crate-type flags. Now, if one or more flags are specified on the command
line, only those will be used.
Closes#11573.
This bug was introduced in #13384 by accident, and this commit continues the
work of #13384 by finishing support for loading a syntax extension crate without
registering it with the local cstore.
Closes#13495
A mismatched type with more type parameters than the expected one causes
`typeck` to index past the end of the type parameter vector, which leads
to an ICE.
Closes#13466
Previously, upstream C libraries were linked in a nondeterministic fashion
because they were collected through iter_crate_data(), which is a nondeterministic
traversal of a hash map. When upstream rlibs had interdependencies among their
native libraries (such as libfoo depending on libc), then the ordering would
occasionally be wrong, causing linkage to fail.
This uses the topologically sorted list of libraries to collect native
libraries, so if a native library depends on libc it just needs to make sure
that the rust crate depends on liblibc.
There are currently a number of return values from the std::comm methods, not
all of which are necessarily completely expressive:
* `Sender::try_send(t: T) -> bool`
This method currently doesn't transmit back the data `t` if the send fails
due to the other end having disconnected. Additionally, this shares the name
of the synchronous try_send method, but it differs in semantics in that it
only has one failure case, not two (the buffer can never be full).
* `SyncSender::try_send(t: T) -> TrySendResult<T>`
This method accurately conveys all possible information, but it uses a
custom type specific to the std::comm module with no convenience methods on it.
Additionally, if you want to inspect the result you're forced to import
something from `std::comm`.
* `SyncSender::send_opt(t: T) -> Option<T>`
This method uses Some(T) as an "error value" and None as a "success value",
but almost all other uses of Option<T> have Some/None the other way around.
* `Receiver::try_recv() -> TryRecvResult<T>`
Similarly to the synchronous try_send, this custom return type is lacking in
terms of usability (no convenience methods).
With this number of drawbacks in mind, I believed it was time to re-work the
return types of these methods. The new API for the comm module is:
Sender::send(t: T) -> ()
Sender::send_opt(t: T) -> Result<(), T>
SyncSender::send(t: T) -> ()
SyncSender::send_opt(t: T) -> Result<(), T>
SyncSender::try_send(t: T) -> Result<(), TrySendError<T>>
Receiver::recv() -> T
Receiver::recv_opt() -> Result<T, ()>
Receiver::try_recv() -> Result<T, TryRecvError>
The notable changes made are:
* Sender::try_send => Sender::send_opt. This renaming brings the semantics in
line with the SyncSender::send_opt method. An asynchronous send only has one
failure case, unlike the synchronous try_send method which has two failure
cases (full/disconnected).
* Sender::send_opt returns the data back to the caller if the send is guaranteed
to fail. This method previously returned `bool`, but then it was unable to
retrieve the data if the data was guaranteed to fail to send. There is still a
race such that when `Ok(())` is returned the data could still fail to be
received, but that's inherent to an asynchronous channel.
* Result is now the basis of all return values. This not only adds lots of
convenience methods to all return values for free, but it also means that you
can inspect the return values with no extra imports (Ok/Err are in the
prelude). Additionally, it's now self-documenting when something failed or not
because the return value has "Err" in the name.
Things I'm a little uneasy about:
* The methods send_opt and recv_opt are not returning options, but rather
results. I felt more strongly that Option was the wrong return type than the
_opt prefix was wrong, and I couldn't think of a much better name for these
methods. One possible way to think about them is to read the _opt suffix as
"optionally".
* Result<T, ()> is often better expressed as Option<T>. This is only applicable
to the recv_opt() method, but I thought it would be more consistent for
everything to return Result rather than one method returning an Option.
Despite my two reasons to feel uneasy, I feel much better about the consistency
in return values at this point, and I think the only real open question is if
there's a better suffix for {send,recv}_opt.
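As a rough illustration of the new return values (a sketch in the syntax of the day, not taken from the patch):
```rust
fn main() {
    let (tx, rx) = channel();

    // send_opt hands the value back via Err(t) if the send is guaranteed to fail.
    match tx.send_opt(42) {
        Ok(()) => {}
        Err(v) => println!("receiver gone, got {} back", v),
    }

    // try_recv returns a Result, so Ok/Err work without any extra imports.
    match rx.try_recv() {
        Ok(v) => println!("received {}", v),
        Err(_) => println!("nothing received"),
    }
}
```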
Closes#11527
This patch fixes issue #13186.
When generating a constant expression for an enum, it is possible that the
alignment of the expression is not equal to the alignment of the type. In that
case the space after the last struct field must be padded so that the size of
the value matches the size of the struct. This commit adds that padding.
See detailed explanation in src/test/run-pass/trans-tag-static-padding.rs
libstd: Implement `StrBuf`, a new string buffer type like `Vec`, and port all code over to use it.
Rebased & tests-fixed version of https://github.com/mozilla/rust/pull/13269
Previously, a private use statement would shadow a public use statement, all of
a sudden publicly exporting the privately used item. The correct behavior here
is to only shadow the use for the module in question, but for now it just
reverts the entire name to private so the pub use doesn't have much effect.
The behavior isn't exactly what we want, but this no longer has backwards
compatibility hazards.
Previously resolve was checking the "import resolution" for whether an import
had succeeded or not, but this was the same structure filled in by a previous
import if a name is shadowed. Instead, this alters resolve to consult the local
resolve state (as opposed to the shared one) to test whether an import succeeded
or not.
Closes#13404
Resolve is currently erroneously allowing imports through private `use`
statements in some circumstances, even across module boundaries. For example,
this code compiles successfully today:
use std::c_str;
mod test {
    use c_str::CString;
}
This should not be allowed because it was explicitly decided that private `use`
statements are purely bringing local names into scope; they do not participate
further in name resolution.
As a consequence of this patch, this code, while valid today, is now invalid:
mod test {
    use std::c_str;
    unsafe fn foo() {
        ::test::c_str::CString::new(0 as *u8, false);
    }
}
While plausibly acceptable, I found it to be more consistent if private imports
were only considered candidates to resolve the first component in a path, and no
others.
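For code like the second example, one fix is to make `c_str` the first component of the path by importing it locally (a sketch, not taken from the patch):
```rust
mod test {
    use std::c_str;

    unsafe fn foo() {
        // `c_str` is now the first path component, so the private import
        // above is allowed to resolve it.
        c_str::CString::new(0 as *u8, false);
    }
}
```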
Closes#12612
I think that the test case from this issue has become out of date with resolve
changes in the past 9 months, and it's not entirely clear to me what the
original bug was.
Regardless, it seems like tricky resolve behavior, so tests were added to make
sure things resolved correctly and warnings were correctly reported.
Closes#7663
This fixes the categorization of the upvars of procs (represented internally
as once fns) to consider usage to require a loan. In doing so, upvars are no
longer allowed to be moved out of repeatedly in loops and such.
Closes#10398
Closes#12041
Closes#12127
This commit changes the way move errors are reported when some value is
captured by a PatIdent. First, we collect all of the "cannot move out
of" errors before reporting them, and those errors with the same "move
source" are reported together. If the move is caused by a PatIdent (that
binds by value), we add a note indicating where it is and suggest the
user to put `ref` if they don't want the value to move. This makes the
"cannot move out of" error in match expression nicer (though the extra
note may not feel that helpful in other places :P). For example, with
the following code snippet,
```rust
enum Foo {
    Foo1(~u32, ~u32),
    Foo2(~u32),
    Foo3,
}
fn main() {
    let f = &Foo1(~1u32, ~2u32);
    match *f {
        Foo1(num1, num2) => (),
        Foo2(num) => (),
        Foo3 => ()
    }
}
```
Errors before the change:
```
test.rs:10:9: 10:25 error: cannot move out of dereference of `&`-pointer
test.rs:10 Foo1(num1, num2) => (),
^~~~~~~~~~~~~~~~
test.rs:10:9: 10:25 error: cannot move out of dereference of `&`-pointer
test.rs:10 Foo1(num1, num2) => (),
^~~~~~~~~~~~~~~~
test.rs:11:9: 11:18 error: cannot move out of dereference of `&`-pointer
test.rs:11 Foo2(num) => (),
^~~~~~~~~
```
After:
```
test.rs:9:11: 9:13 error: cannot move out of dereference of `&`-pointer
test.rs:9 match *f {
^~
test.rs:10:14: 10:18 note: attempting to move value to here (to prevent the move, you can use `ref num1` to capture value by reference)
test.rs:10 Foo1(num1, num2) => (),
^~~~
test.rs:10:20: 10:24 note: and here (use `ref num2`)
test.rs:10 Foo1(num1, num2) => (),
^~~~
test.rs:11:14: 11:17 note: and here (use `ref num`)
test.rs:11 Foo2(num) => (),
^~~
```
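Following the new notes, the moves in the snippet above can be avoided by binding by reference:
```rust
enum Foo {
    Foo1(~u32, ~u32),
    Foo2(~u32),
    Foo3,
}
fn main() {
    let f = &Foo1(~1u32, ~2u32);
    match *f {
        Foo1(ref num1, ref num2) => (),
        Foo2(ref num) => (),
        Foo3 => ()
    }
}
```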
Close#8064
Apparently windows doesn't like reading from stdin with a large buffer size, and
it also apparently is ok with a smaller buffer size. This changes the reader
returned by stdin() to return an 8k buffered reader for stdin rather than a 64k
buffered reader.
Apparently libuv has run into this before, taking a peek at their code, with a
specific comment in their console code saying that "ReadConsole can't handle big
buffers", which I presume is related to invoking ReadFile as if it were a file
descriptor.
Closes#13304
A few places where the previous version of the tidy script could not find XXX:
* inside a one-line comment preceded by a few spaces;
* inside multiline comments (it is now found if the multiline comment starts
on the same line as the XXX).
Change occurrences of XXX found by the new tidy script.
... also don't read the whole directory if the glob for that path
component doesn't contain any metacharacters.
Patterns like `../*.jpg` will work now, and `.*` will match both `.` and
`..` to be consistent with shell expansion.
As before: Just `*` still won't match `.` and `..`, while it will still
match dotfiles like `.git` by default.
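A usage sketch (assuming the `glob` crate of this era, whose `glob()` yields the matching paths as an iterator):
```rust
extern crate glob;

fn main() {
    // `..` no longer forces a full directory read for components without
    // metacharacters, and `.*` also matches the `.` and `..` entries.
    for path in glob::glob("../*.jpg") {
        println!("{}", path.display());
    }
}
```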
`Reader`, `Writer`, `MemReader`, `MemWriter`, and `MultiWriter` now work with `Vec<u8>` instead of `~[u8]`. This does introduce some extra copies since `from_utf8_owned` isn't usable anymore, but I think that can't be helped until `~str`'s representation changes.
In the error message for when a private field is used, include the name of the struct, or if it's a struct-like enum variant, the names of the variant and the enum.
This fixes#13341.
Rust currently defaults to `RelocPIC` regardless. This patch adds a new
codegen option that allows choosing different relocation-model. The
available models are:
- default (Use the target-specific default model)
- static
- pic
- no-pic
For more detailed information, use `llc --help`.
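Assuming the option is spelled `relocation-model` under the codegen flags, usage would look something like this (`foo.rs` is a placeholder):
```
rustc -C relocation-model=static foo.rs
```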
In summary these are some example transitions this change makes:
'a || => ||: 'a
proc:Send() => proc():Send
The intended syntax for closures is to put the lifetime bound not at the front
but rather in the list of bounds. Currently there is no official support in the
AST for bounds that are not 'static, so this case is currently specially handled
in the parser to desugar to what the AST is expecting. Additionally, this moves
the bounds on procedures to the correct position, which is after the argument
list.
The current grammar for closures and procedures is:
procedure := 'proc' [ '<' lifetime-list '>' ] '(' arg-list ')'
             [ ':' bound-list ] [ '->' type ]
closure := [ 'unsafe' ] ['<' lifetime-list '>' ] '|' arg-list '|'
           [ ':' bound-list ] [ '->' type ]
lifetime-list := lifetime | lifetime ',' lifetime-list
arg-list := ident ':' type | ident ':' type ',' arg-list
bound-list := bound | bound '+' bound-list
bound := path | lifetime
This does not currently handle the << ambiguity in `Option<<'a>||>`, I am
deferring that to a later patch. Additionally, this removes the support for the
obsolete syntaxes of ~fn and &fn.
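In source form the transitions read roughly as follows (a sketch with hypothetical function signatures):
```rust
// Lifetime and trait bounds now come after the argument list.
fn call_borrowing<'a>(f: ||: 'a) { f() }   // was: f: 'a ||
fn run_sendable(p: proc(): Send) { p() }   // was: p: proc:Send()
```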
Closes#10553
Closes#10767
Closes#11209
Closes#11210
Closes#11211
This can be a frustrating error message; ideally we would print the signature
mismatch (perhaps in the error helper?), but hinting that it's a trait
incompatibility helps track down the root cause. Also beefed up the testcases
for this.
rustc: move the check_loop pass earlier.
This pass is purely AST based, and by running it earlier we emit more
useful error messages, e.g. type inference fails in the case of
`let r = break;` with few constraints on `r`, but it's more useful to be told that
the `break` is outside the loop (rather than a type error) when it is.
Closes#13292.
Fix#13266.
There is a little bit of acrobatics in the definition of `crate_paths`
to avoid calling `clone()` on the dylib/rlib unless we actually are
going to need them.
The other oddity is that I have replaced the `root_ident: Option<&str>`
parameter with a `root: &Option<CratePaths>`, which may surprise one
who was expecting to see something like: `root: Option<&CratePaths>`.
I went with the approach here because I could not come up with code for
the alternative that was acceptable to the borrow checker.
All it checks, unfortunately, is that you actually printed at least
two lines for crateA paths and at least one line for crateB paths.
But that's enough to capture the spirit of the bug, I think. I did
not bother trying to verify that the paths themselves reflected where
the crates end up.
`RefCell::get` can be a bit surprising, because it actually clones the wrapped value. This removes `RefCell::get` and replaces all the users with `RefCell::borrow()` when it can, and `RefCell::borrow().clone()` when it can't. It removes `RefCell::set` for consistency. This closes#13182.
It also fixes an infinite loop in a test when debugging is on.
rustc: feature-gate `concat_idents!`.
concat_idents! is not as useful as it could be, due to macros only being
allowed in limited places, and hygiene, so let's feature-gate it until we
make a decision about it.
cc #13294
This was missed when dropping the null-termination from our string
types. An explicit null byte can still be placed anywhere in a string if
desired, but there's no reason to stick one at the end of every string
constant.
This fixes#13238. It avoids an infinite loop when compiling
the tests with `-g`. Without this change, the debuginfo on
`black_box` prevents the method from being inlined, which
allows llvm to convert `silent_recurse` into a tail-call. This
then loops forever instead of consuming all the stack like it
is supposed to. This patch forces inlining `black_box`, which
triggers the right error.
It's surprising that `RefCell::get()` is implicitly doing a clone
on a value. This patch removes it and replaces all users with
either `.borrow()` when we can autoderef, or `.borrow().clone()`
when we cannot.
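In caller code the change looks roughly like this (a sketch, not taken from the patch):
```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);
    // Before: let x = cell.get();     // implicitly cloned the value
    let x = cell.borrow().clone();     // the clone is now explicit
    println!("{}", x);
}
```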
Closes#13285 (rustc: Stop using LLVMGetSectionName)
Closes#13280 (std: override clone_from for Vec.)
Closes#13277 (serialize: add a few missing pubs to base64)
Closes#13275 (Add and remove some ignore-win32 flags)
Closes#13273 (Removed managed boxes from libarena.)
Closes#13270 (Minor copy-editing for the tutorial)
Closes#13267 (fix Option<~ZeroSizeType>)
Closes#13265 (Update emacs mode to support new `#![inner(attribute)]` syntax.)
Closes#13263 (syntax: Remove AbiSet, use one Abi)
This commit tightens up the restriction on types used to index slices to require
exactly `uint` indices. Previously any integral type was accepted, but this
leads to a few subtle problems:
* 64-bit indices don't make much sense on 32-bit systems
* Signed indices for slices used as negative indexing isn't implemented
This was discussed at the recent work week, and also has some discussion on
issue #10453.
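Concretely, index expressions must now be `uint` (sketch):
```rust
fn main() {
    let xs = [10, 20, 30];
    let i: uint = 1;
    println!("{}", xs[i]);      // ok: the index is a uint
    // let j: i64 = 1;
    // println!("{}", xs[j]);   // now rejected: indices must be uint
}
```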
Closes#10453
move errno -> IoError converter into std, bubble up OSRng errors
Also adds a general errno -> `~str` converter to `std::os`, and makes the failure messages for things using `OSRng` (e.g., transitively, the task-local RNG) more informative, meaning hashmap initialisation failures aren't such a black box.
The various ...Rng::new() methods can hit IO errors from the OSRng they use,
and it seems sensible to expose them at a higher level. Unfortunately, writing
e.g. `StdRng::new().unwrap()` gives a much poorer error message than if it
failed internally, but this is a problem with all `IoResult`s.
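For example (a sketch, assuming the era's `std::rand::StdRng` and the new `IoResult`-returning constructor described above):
```rust
use std::rand::StdRng;

fn main() {
    // An OS RNG failure now surfaces here instead of failing internally.
    let _rng = StdRng::new().unwrap();
}
```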
This commit deals with the fallout of the previous change by making tuple
structs have public fields where necessary (now that the fields are private by
default).
This is a continuation of the work done in #13184 to make struct fields private
by default. This commit finishes RFC 4 by making all tuple structs have private
fields by default. Note that enum variants are not affected.
A tuple struct having a private field means that it cannot be matched on in a
pattern match (both refutable and irrefutable), and it also cannot have a value
specified to be constructed. Similarly to private fields, switching the type of
a private field in a tuple struct should be able to be done in a backwards
compatible way.
The one snag that I ran into which wasn't mentioned in the RFC is that this
commit also forbids taking the value of a tuple struct constructor. For example,
this code now fails to compile:
mod a {
    pub struct A(int);
}
let a: fn(int) -> a::A = a::A; //~ ERROR: first field is private
Although no fields are bound in this example, it exposes implementation details
through the type itself. For this reason, taking the value of a struct
constructor with private fields is forbidden (outside the containing module).
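One way to keep construction available outside the module is a public constructor function (a sketch; `new_a` is a hypothetical helper, not part of the patch):
```rust
mod a {
    pub struct A(int);

    // hypothetical helper so callers outside `a` can still build an A
    pub fn new_a(x: int) -> A { A(x) }
}

fn main() {
    let _x = a::new_a(3);
}
```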
RFC: 0004-private-fields
`collections::list::List` was decided in a [team meeting](https://github.com/mozilla/rust/wiki/Meeting-weekly-2014-03-25) that it was unnecessary, so this PR removes it. Additionally, it removes an old and redundant purity test and fixes some warnings.
Only supports crate level statics. No debug info is generated for function level statics. Closes#9227.
As discussed at the end of the comments for #9227, I took an initial stab at adding support for function level statics and decided it would be enough work to warrant being split into a separate issue.
See #13144 for the new issue describing the need to add support for function level static variables.
r? @nikomatsakis
Fix#13140
Includes two fixes, and a semi-thorough regression test.
(There is another set of tests that I linked from #5121, but those are sort of all over the place, while the ones I've included here are more directly focused on the issues at hand.)
This removes the `attr` matcher and adds a `meta` matcher. The previous `attr`
matcher is now ambiguous because it doesn't disambiguate whether it means inner
attribute or outer attribute.
The new behavior can still be achieved by taking an argument of the form
`#[$foo:meta]` (the brackets are part of the macro pattern).
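A sketch of a macro using the new matcher (hypothetical macro; era `macro_rules!` syntax assumed):
```rust
#![feature(macro_rules)]

// Re-emits an item with the captured attribute applied to it.
macro_rules! with_meta(
    (#[$m:meta] $it:item) => (
        #[$m]
        $it
    )
)

fn main() {}
```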
Closes#13067
Some unix platforms will send a SIGPIPE signal instead of returning EPIPE from a
syscall by default. The native runtime doesn't install a SIGPIPE handler,
causing the program to die immediately in this case. This brings the behavior in
line with libgreen by ignoring SIGPIPE and propagating EPIPE upwards to the
application in the form of an IoError.
Closes#13123
It turns out that on linux, and possibly other platforms, child processes will
continue to accept signals until they have been *reaped*. This means that once
the child has exited, it will continue to receive signals until waitpid() has
been invoked on it.
This is unfortunate behavior, and differs from what is seen on OSX and windows.
This commit changes the behavior of Process::signal() to be the same across
platforms, and updates the documentation of Process::kill() to note that when
signaling a foreign process it may accept signals until reaped.
Implementation-wise, this invokes waitpid() with WNOHANG before each signal to
the child to ensure that if the child has exited that we will reap it. Other
possibilities include installing a SIGCHLD signal handler, but at this time I
believe that that's too complicated.
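At the libc level, the reap-before-signal check is roughly this pattern (a sketch only, assuming the libc bindings expose `waitpid`, `WNOHANG`, and `kill`; not the actual implementation):
```rust
use std::libc;

// Reap the child with WNOHANG before signalling, so an already-exited
// child is never signalled after it has been reaped.
fn signal_child(pid: libc::pid_t, signum: libc::c_int) -> bool {
    let mut status = 0 as libc::c_int;
    unsafe {
        if libc::waitpid(pid, &mut status, libc::WNOHANG) == pid {
            return false; // child already exited and is now reaped
        }
        libc::kill(pid, signum) == 0
    }
}
```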
Closes#13124
Summary:
So far, we've used the term POD ("Plain Old Data") to refer to types that
can be safely copied. However, this term is not consistent with the
other built-in bounds, which use verbs instead. This patch renames the Pod
kind to Copy.
RFC: 0003-opt-in-builtin-traits
Test Plan: make check
Reviewers: cmr
Differential Revision: http://phabricator.octayn.net/D3
This bench is meant to exercise libgreen, not libnative. It recently caused the
auto-linux-32-nopt-t bot to fail as no output was produced for an hour.
We really do *not* want TCO to kick in. If it does, we'll never blow the
stack, and never trigger the condition the test is checking for. To that end,
do a meaningless alloc that serves only to get a destructor to run. The
addition of nocapture/noalias seems to have let LLVM do more TCO, which
hurt this testcase.
`_match.rs` takes advantage of passes prior to `trans` and
aggressively prunes the sub-match tree based on exact equality. When it
comes to literals or ranges, this strategy may lead to a wrong result if
there's a guard function or multiple patterns inside a tuple.
Closes#12582.
Closes#13027.
The previous syntax was `Foo:Bound<trait-parameters>`, but this is a little
ambiguous because it was being parsed as `Foo: (Bound<trait-parameters>)` rather
than `Foo: (Bound) <trait-parameters>`.
This commit changes the syntax to `Foo<trait-parameters>: Bound` in order to be
clear where the trait parameters are going.
Closes#9265
This change prepares `rustc` to accept private fields by default. These changes will have to go through a snapshot before the rest of the changes can happen.