This pull request extracts all scheduling functionality from libstd into its own separate crates. The new libnative and libgreen crates are now how 1:1 and M:N scheduling are implemented. The standard library still requires an interface to the runtime, however (think of things like `std::comm` and `io::println`). That interface is now defined by the `Runtime` trait inside `std::rt`.
The boot process is now simply that libgreen defines the start lang item, and that's it. I want to extend this soon so that libnative also has a start lang item, and so that libgreen and libnative can be linked into the same process. For now, though, only libgreen can be used to start a program (unless you define the start lang item yourself). Again, I want to change this soon; I just figured that this pull request is large enough as-is.
This certainly wasn't a smooth transition: some functionality has no equivalent in this new separation, and some functionality is now better enabled through the new system. I did my best to separate the commits by topic and keep them fairly bite-sized, although some are indeed larger than others.
As a note, this is currently rebased on top of my `std::comm` rewrite (or at least an old copy of it), but none of those commits need reviewing (that will all happen in another pull request).
* vec::raw::to_ptr is gone
* Pausible => Pausable
* Removing @
* Calling the main task "<main>"
* Removing unused imports
* Removing unused mut
* Bringing some libextra tests up to date
* Allowing compiletest to work at stage0
* Fixing the bootstrap-from-c rmake tests
* assert => rtassert in a few cases
* printing to stderr instead of stdout in fail!()
This measure simply allows programs to continue compiling as they once did. In the
future, this will need a more robust solution for choosing whether to start with
libgreen or libnative.
For now, this moves the following modules to std::sync
* UnsafeArc (also removed unwrap method)
* mpsc_queue
* spsc_queue
* atomics
* mpmc_bounded_queue
* deque
We may want to remove some of the queues, but for now this moves things out of
std::rt into std::sync
This uses quite a bit of unsafe code for speed and failure safety, and it allocates `2*n` elements of temporary storage.
[Performance](https://gist.github.com/huonw/5547f2478380288a28c2):
| n | new | priority_queue | quick3 |
|-------:|---------:|---------------:|---------:|
| 5 | 200 | 155 | 106 |
| 100 | 6490 | 8750 | 5810 |
| 10000 | 1300000 | 1790000 | 1060000 |
| 100000 | 16700000 | 23600000 | 12700000 |
| sorted | 520000 | 1380000 | 53900000 |
| trend | 1310000 | 1690000 | 1100000 |
(The times are in nanoseconds, having subtracted the set-up time (i.e. the `just_generate` bench target).)
I imagine that there is still significant room for improvement, particularly because both priority_queue and quick3 are doing a static call via `Ord` or `TotalOrd` for the comparisons, while this is using a (boxed) closure.
Also, this code does not `clone`, unlike `quick_sort3`, and it is stable, unlike both of the others.
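For reference, a minimal sketch of the general approach (a stable merge sort driven by a comparison closure plus a temporary buffer), written in present-day Rust purely as an illustration; unlike the actual implementation it uses safe clones rather than unsafe moves, and none of the names below come from this patch:
```
// Stable merge sort taking a comparison closure; ties prefer the left half,
// which is what makes the sort stable. This sketch clones elements instead of
// using unsafe moves, so it is simpler (and slower) than the real thing.
fn merge_sort_by<T: Clone>(v: &mut [T], le: &dyn Fn(&T, &T) -> bool) {
    let n = v.len();
    if n <= 1 {
        return;
    }
    let mid = n / 2;
    merge_sort_by(&mut v[..mid], le);
    merge_sort_by(&mut v[mid..], le);

    // Merge the two sorted halves into temporary storage, then copy back.
    let mut tmp = Vec::with_capacity(n);
    {
        let (a, b) = v.split_at(mid);
        let (mut i, mut j) = (0, 0);
        while i < a.len() && j < b.len() {
            if le(&a[i], &b[j]) {
                tmp.push(a[i].clone());
                i += 1;
            } else {
                tmp.push(b[j].clone());
                j += 1;
            }
        }
        tmp.extend_from_slice(&a[i..]);
        tmp.extend_from_slice(&b[j..]);
    }
    v.clone_from_slice(&tmp);
}

fn main() {
    let mut data = vec![5, 3, 3, 1, 4];
    merge_sort_by(&mut data, &|a, b| a <= b);
    assert_eq!(data, [1, 3, 3, 4, 5]);
}
```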
Trap the io_error condition so that a more informative error message is
displayed when the linker program cannot be started, such as when the
name of the linker binary is accidentally mistyped.
closes #10755
Right now the --crate-id and related flags are all processed *after* the entire
crate is parsed. This is less than desirable when used with makefiles, because it
means that just to learn the output name of the crate you have to parse the
entire crate unnecessarily.
This commit changes the behavior to lift the handling of these flags much sooner
in the compilation process. This allows us to not have to parse the entire crate
and only have to worry about parsing the crate attributes themselves. The
related methods have all been updated to take an array of attributes rather than
a crate.
Additionally, this stops duplicating the "what output are we producing"
logic, in order to correctly handle things in the case of --test.
Finally, this adds tests for all of this functionality to ensure that it does
not regress.
We were previously reading metadata via `ar p`, but as we learned from rustdoc
a while back, spawning a process to do something is pretty slow. It turns out LLVM
has an Archive class to read archives, but it cannot write archives.
This commit adds bindings to the read-only version of the LLVM archive class
(with a new type that only has a read() method), and then it uses this class
when reading the metadata out of rlibs. When you put this in tandem with not
compressing the metadata, reading the metadata is 4x faster than it used to be.
The timings I got for reading metadata from the respective libraries were:
libstd-04ff901e-0.9-pre.dylib => 100ms
libstd-04ff901e-0.9-pre.rlib => 23ms
librustuv-7945354c-0.9-pre.dylib => 4ms
librustuv-7945354c-0.9-pre.rlib => 1ms
librustc-5b94a16f-0.9-pre.dylib => 87ms
librustc-5b94a16f-0.9-pre.rlib => 35ms
libextra-a6ebb16f-0.9-pre.dylib => 63ms
libextra-a6ebb16f-0.9-pre.rlib => 15ms
libsyntax-2e4c0458-0.9-pre.dylib => 86ms
libsyntax-2e4c0458-0.9-pre.rlib => 22ms
In order to always take advantage of these faster metadata read-times, I sort
the files in filesearch based on whether they have an rlib extension or not
(prefer all rlib files first).
Overall, this halved the compile time for a `fn main() {}` crate from 0.185s to
0.095s on my system (when preferring dynamic linking). Reading metadata is still
the slowest pass of the compiler at 0.035s, but it's getting pretty close to
linking at 0.021s! The next best optimization is to just not copy the metadata
from LLVM because that's the most expensive part of reading metadata right now.
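As an illustration of the "prefer rlibs first" ordering (this is not the filesearch code itself; it is written in present-day Rust, and the file names are simply taken from the timing list above):
```
fn main() {
    let mut candidates = vec![
        "libstd-04ff901e-0.9-pre.dylib",
        "libstd-04ff901e-0.9-pre.rlib",
        "librustuv-7945354c-0.9-pre.dylib",
        "librustuv-7945354c-0.9-pre.rlib",
    ];
    // sort_by_key is stable, so relative order within each group is kept;
    // `false` sorts before `true`, which floats every .rlib to the front so
    // its faster-to-read metadata is preferred.
    candidates.sort_by_key(|name| !name.ends_with(".rlib"));
    assert!(candidates.iter().take(2).all(|name| name.ends_with(".rlib")));
    println!("{:?}", candidates);
}
```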
Now that the metadata is an owned value with a lifetime of a borrowed byte
slice, it's possible to have future optimizations where the metadata doesn't
need to be copied around (very expensive operation).
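A minimal sketch of that ownership shape (the type and field names are made up, and this is written in present-day Rust):
```
// The decoded metadata borrows the raw bytes for lifetime 'a instead of
// copying them; the borrow checker then guarantees the view never outlives
// the underlying buffer.
struct MetadataView<'a> {
    raw: &'a [u8],
}

impl<'a> MetadataView<'a> {
    fn new(raw: &'a [u8]) -> MetadataView<'a> {
        MetadataView { raw }
    }

    fn len(&self) -> usize {
        self.raw.len()
    }
}

fn main() {
    let bytes = [1u8, 2, 3, 4];
    let view = MetadataView::new(&bytes);
    println!("metadata view covers {} bytes without copying", view.len());
}
```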
For `str.as_mut_buf`, un-closure-ification is achieved by outright removal (see the commit message). The others are replaced by `.as_ptr`, `.as_mut_ptr` and `.len`.
As the title says. The trans changes will lead to an auxiliary alloca being created that allows debug info to track the `self` argument. This alloca is only created in debug builds however. Otherwise very little had to be done after I managed to navigate to some degree the jungle that is self-argument handling `:P`
Closes #10549
When a borrow occurs twice illegally, Rust will label the other borrow
as the "second borrow". This is quite confusing, as the "second borrow"
usually happened before the flagged borrow (e.g. as far as dataflow
is concerned, the first borrow is OK, the second borrow is illegal.)
This patch renames "second borrow" to "previous borrow", to make the
spatial relationship between the two borrows clearer.
Signed-off-by: Edward Z. Yang <ezyang@cs.stanford.edu>
This code in resolve accidentally forced all types with an impl to become
public. This fixes it by defaulting to the privacy of what was previously
there, and only becoming `true` if nothing else exists.
Closes #10545
By performing this logic very late in the build process, it ended up leading to
bugs like those found in #10973 where certain stages of the build process
expected a particular output format which didn't end up being the case. In order
to fix this, the build output generation is moved very early in the build
process to the absolute first thing in phase 2.
Closes #10973
If it's a trait method, this checks the stability attribute of the
method inside the trait definition. Otherwise, it checks the method
implementation itself.
Closes #8961.
This pull request completely rewrites std::comm and all associated users. Some major bullet points:
* Everything now works natively
* oneshots have been removed
* shared ports have been removed
* try_recv no longer blocks (recv_opt blocks)
* constructors are now Chan::new and SharedChan::new
* failure is propagated on send
* stream channels are 3x faster
I have acquired the following measurements on this patch. I compared against Go, but remember that Go's channels are fundamentally different from ours in that sends are blocking by default. This means that it's not really a totally fair comparison, but it's good to see ballpark numbers anyway.
```
oneshot stream shared1
std 2.111 3.073 1.730
my 6.639 1.037 1.238
native 5.748 1.017 1.250
go8 1.774 3.575 2.948
go8-inf slow 0.837 1.376
go8-128 4.832 1.430 1.504
go1 1.528 1.439 1.251
go2 1.753 3.845 3.166
```
I had three benchmarks:
* oneshot - N times, create a "oneshot channel", send on it, then receive on it (no task spawning)
* stream - N times, send from one task to another task, wait for both to complete
* shared1 - create N threads, each of which sends M times, and a port receives N*M times.
The rows are as follows:
* `std` - the current libstd implementation (before this pull request)
* `my` - this pull request's implementation (in M:N mode)
* `native` - this pull request's implementation (in 1:1 mode)
* `goN` - go's implementation with GOMAXPROCS=N. The only relevant value is 8 (I had 8 cores on this machine)
* `goN-X` - go's implementation where the channels in question were created with buffers of size `X` to behave more similarly to rust's channels.
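To make the shared1 shape concrete, here is that benchmark's skeleton written against today's `std::sync::mpsc`, purely as an illustration of what is being measured; the API introduced by this pull request itself is the Chan/SharedChan one described above.
```
use std::sync::mpsc::channel;
use std::thread;

// N senders each send M values; a single receiver drains all N*M of them.
fn shared1(n: usize, m: usize) {
    let (tx, rx) = channel();
    let mut handles = Vec::new();
    for _ in 0..n {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            for _ in 0..m {
                tx.send(1u32).unwrap();
            }
        }));
    }
    drop(tx); // only the clones held by the senders remain
    for _ in 0..(n * m) {
        rx.recv().unwrap();
    }
    for handle in handles {
        handle.join().unwrap();
    }
}

fn main() {
    shared1(8, 1000);
}
```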
This PR improves the stepping experience in GDB. It contains some fine tuning of line information and makes *rustc* produce nearly the same IR/DWARF as Clang. The focus of the changes is function prologue handling which has caused some problems in the past (https://github.com/mozilla/rust/issues/9641).
It seems that GDB does not properly handle function prologues when the function uses segmented stacks, i.e. it does not recognize that the `__morestack` check is part of the prologue. When setting a breakpoint like `break foo` it will set the breakpoint before the arguments of `foo()` have been loaded, so they still contain bogus values. For functions with the #[no_split_stack] attribute this problem has never occurred for me, so I'm pretty sure that segmented stacks are the cause of the problem. @jdm mentioned that segmented stacks won't be completely abandoned after all. I'd be grateful if you could tell me about what the future might bring in this regard (@brson, @cmr).
Anyway, this PR should alleviate this problem, at least when setting breakpoints using line numbers, and also make it less confusing when setting them via function names, because then GDB will break *before* the first statement, at a point where one could conceivably argue that the arguments need not be initialized yet.
Also, a koala: 🐨
Cheers,
Michael
Understand 'pkgid' in stage0. As a bonus, the snapshot now contains no metadata
(now that those changes have landed), and the snapshot download is half as large
as it used to be!
The problem was that std::run::Process::new() was unwrap()ing the result
of std::io::process::Process::new(), which returns None in the case
where the io_error condition is raised to signal failure to start the
process.
Have std::run::Process::new() similarly return an Option<run::Process>
to reflect the fact that a subprocess might have failed to start. Update
utility functions run::process_status() and run::process_output() to
return Option<ProcessExit> and Option<ProcessOutput>, respectively.
Various parts of librustc and librustpkg needed to be updated to reflect
these API changes.
closes #10754
When performing LTO, the rust compiler has an opportunity to completely strip
all landing pads in all dependent libraries. I've modified the LTO pass to
recognize the -Z no-landing-pads option when also running an LTO pass to flag
everything in LLVM as nothrow. I've verified that this prevents any and all
invoke instructions from being emitted.
I believe that this is one of our best options for moving forward with
accommodating use cases where unwinding doesn't really make sense. This will
allow libraries to be built with landing pads by default, but allow usage of them
in contexts where landing pads aren't necessary.
cc #10780
Turns out that on some platforms the ar/ranlib tool will die with an assertion
if the file being added doesn't actually have any symbols (or if it's just not
an object file presumably).
This functionality is already all exercised on the bots, it just turns out that
the bots don't have an ar tool which dies in this situation, so it's difficult
for me to add a test.
Closes #10907
When --dep-info is given, rustc will write out a `$input_base.d` file in the
output directory that contains Makefile compatible dependency information for
use with tools like make and ninja.
This replaces the link meta attributes with a pkgid attribute and uses a hash
of this as the crate hash. This makes the crate hash computable by things
other than the Rust compiler. It also switches the hash function to SHA-1, since
that is much more likely to be available in shell, Python, etc. than SipHash.
Fixes #10188, #8523.
This bug showed up because the visitor only visited the path of the implemented
trait via walk_path (with no corresponding visit_path function). I have modified
the visitor to use visit_path (which is now overridable), and the privacy
visitor overrides this function and now properly checks for the privacy of all
paths.
Closes #10857
The first commit was approved from another pull request, but I wanted to rebase LTO on top of it.
LTO is not turned on by default at all, and it's hidden behind a `-Z` flag. I have added a few small tests for it, however.
This commit implements LTO for rust leveraging LLVM's passes. What this means
is:
* When compiling an rlib, in addition to inserting foo.o into the archive, also
insert foo.bc (the LLVM bytecode) of the optimized module.
* When the compiler detects the -Z lto option, it will attempt to perform LTO on
a staticlib or binary output. The compiler will emit an error if a dylib or
rlib output is being generated.
* The actual act of performing LTO is as follows:
1. Force all upstream libraries to have an rlib version available.
2. Load the bytecode of each upstream library from the rlib.
3. Link all this bytecode into the current LLVM module (just using llvm
apis)
4. Run an internalization pass which internalizes all symbols except those
found reachable for the local crate of compilation.
5. Run the LLVM LTO pass manager over this entire module
6a. If assembling an archive, then add all upstream rlibs into the output
archive. This ignores all of the object/bitcode/metadata files rust
generated and placed inside the rlibs.
6b. If linking a binary, create copies of all upstream rlibs, remove the
rust-generated object-file, and then link everything as usual.
As I have explained in #10741, this process is excruciatingly slow, so this is
*not* turned on by default, and it is also why I have decided to hide it behind
a -Z flag for now. The good news is that the binary sizes are about as small as
they can be as a result of LTO, so it's definitely working.
Closes #10741. Closes #10740.
Right now whenever an rlib file is linked against, all of the metadata from the
rlib is pulled in to the final staticlib or binary. The reason for this is that
the metadata is currently stored in a section of the object file. Note that this
is intentional for dynamic libraries in order to distribute metadata bundled
with static libraries.
This commit alters the situation for rlib libraries to instead store the
metadata in a separate file in the archive. In doing so, when the archive is
passed to the linker, none of the metadata will get pulled into the result
executable. Furthermore, the metadata file is skipped when assembling rlibs into
an archive.
The snag in this implementation comes with multiple output formats. When
generating a dylib, the metadata needs to be in the object file, but when
generating an rlib this needs to be separate. In order to accomplish this, the
metadata variable is inserted into an entirely separate LLVM Module which is
then codegen'd into a different location (foo.metadata.o). This is then linked
into dynamic libraries and silently ignored for rlib files.
While changing how metadata is inserted into archives, I have also stopped
compressing metadata when inserted into rlib files. We have wanted to stop
compressing metadata, but the sections it creates in object files are
apparently too large. Thankfully if it's just an arbitrary file it doesn't
matter how large it is.
I have seen massive reductions in executable sizes, as well as staticlib output
sizes (to confirm that this is all working).
Last LLVM update seems to have fixed whatever prevented the LLVM integrated assembler from generating correct unwind tables on Windows. This PR switches Windows builds to use the internal assembler by default.
Compilation via external assembler can still be requested via the newly added `-Z no-integrated-as` option.
Closes #8809
Previously something like
```
struct NotEq;

#[deriving(Eq)]
struct Error {
    foo: NotEq
}
```
would just point to the `foo` field, with no mention of the `deriving(Eq)`. With this patch, the compiler creates a note saying "in expansion of #[deriving(Eq)]" pointing to the Eq.
(includes some cleanup/preparation; the commit view might be nicer, to filter out the noise of the first one.)
In order to keep up to date with changes to the libraries that `llvm-config`
spits out, the dependencies on LLVM are listed in a dynamically generated rust
file. That file is now regenerated automatically whenever LLVM is updated, so it
stays current.
At the same time, this cleans out some old cruft which isn't necessary in the
makefiles in terms of dependencies.
Closes #10745. Closes #10744.
This upgrades LLVM in order to make progress on #10708, and it's also been a while since we last upgraded!
The contentious point of this upgrade is that all JIT support has been removed because LLVM is changing it and we're not keeping up with it.
The main one removed is rust_upcall_reset_stack_limit (continuation of #10156),
and this also removes the upcall_trace function. That was hidden behind a
`-Z trace` flag, but if you attempt to use this now you'll get a linker error
because there is no implementation of the 'upcall_trace' function. Due to this
no longer working, I decided to remove it entirely from the compiler (I'm also a
little unsure on what it did in the first place).
Make trait lifetime parameters early bound in static fn type. Reasoning for this change is (hopefully) explained well enough in the comment, so I'll not duplicate it here. Fixes #10391.
r? @pnkfelix
LLVM's JIT has been updated numerous times, and we haven't been tracking it at
all. The existing LLVM glue code no longer compiles, and the JIT isn't used for
anything currently.
This also rebases out the FixedStackSegment support which we have added to LLVM.
None of this is still in use by the compiler, and there's no need to keep this
functionality around inside of LLVM.
This is needed to unblock #10708 (where we're tripping an LLVM assertion).
This reverts commit c54427ddfb.
Leave in the #[ignore]s that were added to rustpkg tests.
Conflicts:
src/librustc/driver/driver.rs
src/librustc/metadata/creader.rs
This function had type &[u8] -> ~str, i.e. it allocates a string
internally, even though the non-allocating versions that take &[u8] ->
&str and ~[u8] -> ~str are all that is necessary in most circumstances.
This registers new snapshots after the landing of #10528, and then goes on to tweak the build process to build a monolithic `rustc` binary for use in future snapshots. This mainly involved dropping the dynamic dependency on `librustllvm`, so that's now built as a static library (with a dynamically generated rust file listing LLVM dependencies).
This currently doesn't actually make the snapshot any smaller (24MB => 23MB), but I noticed that the executable has 11MB of metadata so once progress is made on #10740 we should have a much smaller snapshot.
There's not really a super-compelling reason to distribute just a binary because we have all the infrastructure for dealing with a directory structure, but to me it seems "more correct" that a snapshot compiler is just a `rustc` binary.
I've renamed `MutableVector::mut_split(at)` to `MutableVector::mut_split_at(at)` to be coherent with ImmutableVector. As specified in the commit log, the `size_hint` method is not optimal because of #9629.
* Don't flag any address_insignificant statics as reachable because the whole
point of the address_insignificant optimization is that the static is not
reachable. Additionally, there's no need for it to be reachable because LLVM
optimizes it away.
* Be sure not to leak external node ids into our reachable set; this can
spuriously cause local items to be considered reachable if the node ids just
happen to line up.
This is inspired by a mystifying linker failure when using `pkg-config` to
generate the linker args: `pkg-config` produces output that ends in a
space, thus resulting in an empty linker argument.
Also added some updates to the relevant error messages, which helped in
spotting this bug.
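A tiny sketch of the failure mode (the flag strings are made up; the point is just the empty trailing piece):
```
fn main() {
    // Output in the style of `pkg-config --libs`, ending in a space.
    let output = "-L/usr/lib -lfoo ";
    for piece in output.split(' ') {
        if piece.is_empty() {
            // This empty "argument" is what used to be handed to the linker.
            println!("found an empty linker argument");
        } else {
            println!("arg: {}", piece);
        }
    }
}
```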
**Note**: I only tested on top of my #10670 PR, size reductions come from both change sets.
With this, [more enums are shrunk](https://gist.github.com/eddyb/08fef0dfc6ff54e890bc), the most significant one being `ast_node`, from 104 bytes (master) to 96 (#10670) and now to 32 bytes.
My own testcase requires **200MB** less when compiling (not including the other **200MB** gained in #10670), and rustc-stage2 is down by about **130MB**.
I believe there is more to gain by fiddling with the enums' layouts.
This adds support to link to OSX frameworks via the new link attribute when
using `kind = "framework"`. It is a compiler error to request linkage to a
framework when the target is not macos because other platforms don't support
frameworks.
Closes #2023
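A short sketch of what requesting a framework looks like with the new attribute; the framework and function are just familiar examples, not taken from this patch, and per the change above this only builds when targeting macos:
```
// Link against an OSX framework rather than a plain library.
#[link(name = "CoreFoundation", kind = "framework")]
extern {
    fn CFAbsoluteTimeGetCurrent() -> f64;
}

fn main() {
    let now = unsafe { CFAbsoluteTimeGetCurrent() };
    println!("seconds since the CoreFoundation epoch: {}", now);
}
```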
In this series of commits, I've implemented static linking for rust. The scheme I implemented was the same as my [mailing list post](https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html).
The commits have more details on the nitty gritty of what went on. I've rebased this on top of my native mutex pull request (#10479), but I imagine that it will land before this lands; I just wanted to pre-emptively get all the rebase conflicts out of the way (because this is reorganizing the building of librustrt as well).
Some contentious points I want to make sure are all good:
* I've added more "compiler chooses a default" behavior than I would like, I want to make sure that this is all very clearly outlined in the code, and if not I would like to remove behavior or make it clearer.
* I want to make sure that the new "fancy suite" tests are ok (using make/python instead of another rust crate)
If we do indeed pursue this, I would be more than willing to write up a document describing how linking in rust works. I believe that this behavior should be very understandable, and the compiler should never hinder someone just because linking is a little fuzzy.
In #10422, I didn't actually test to make sure that the '-Z gen-crate-map'
option was usable before I implemented it. The crate map was indeed generated
when '-Z gen-crate-map' was specified, but the I/O factory slot was empty
because of an extra check in trans about filling in that location.
This commit both fixes that location, and checks in a "fancy test" which does
lots of fun stuff. The test will use the rustc library to compile a rust crate,
and then compile a C program to link against that crate and run the C program.
To my knowledge this is the first test of its kind, so it's a little ad-hoc, but
it seems to get the job done. We could perhaps generalize running tests like
this, but for now I think it's fine to have this sort of functionality tucked
away in a test.
This commit implements the support necessary for generating both intermediate
and result static rust libraries. This is an implementation of my thoughts in
https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html.
When compiling a library, we still retain the "lib" option, although now there
are "rlib", "staticlib", and "dylib" as options for crate_type (and these are
stackable). The idea of "lib" is to generate the "compiler default" instead of
having to choose (although all are interchangeable). For now I have left the
"compiler default" as a dynamic library, for size reasons.
Of the rust libraries, lib{std,extra,rustuv} will bootstrap with an
rlib/dylib pair, but lib{rustc,syntax,rustdoc,rustpkg} will only be built as a
dynamic object. I chose this for size reasons, but also because you're probably
not going to be embedding the rustc compiler anywhere any time soon.
Other than the options outlined above, there are a few defaults/preferences that
are now opinionated in the compiler:
* If both a .dylib and .rlib are found for a rust library, the compiler will
prefer the .rlib variant. This is overridable via the -Z prefer-dynamic option
* If generating a "lib", the compiler will generate a dynamic library. This is
overridable by explicitly saying what flavor you'd like (rlib, staticlib,
dylib).
* If no options are passed on the command line, and no crate_type is found in
the destination crate, then an executable is generated
With this change, you can successfully build a rust program with 0 dynamic
dependencies on rust libraries. There is still a dynamic dependency on
librustrt, but I plan on removing that in a subsequent commit.
This change includes no tests just yet. Our current testing
infrastructure/harnesses aren't very amenable to doing flavorful things with
linking, so I'm planning on adding a new mode of testing which I believe belongs
as a separate commit.
Closes #552
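For illustration, requesting these flavours from the crate itself might look like the following; the flavour names come from the text above, while the crate-attribute spelling (this era's trailing-semicolon style) is an assumption:
```
// Ask for both an rlib and a dylib; crate_type is stackable, so both outputs
// are produced. Leaving this out (or writing "lib") picks the compiler default.
#[crate_type = "rlib"];
#[crate_type = "dylib"];

pub fn hello() -> int { 42 }
```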
- Removed module reexport workaround for the integer module macros
- Removed legacy reexports of `cmp::{min, max}` in the integer module macros
- Combined a few macros in `vec` into one
- Documented a few issues
While tracking down how this function became dead, I identified a spot
(@fn cannot happen) where we would probably prefer to ICE rather than
pass silently, so I added a fail! invocation.
Instead of forcibly always aborting compilation, allow usage of
#[warn(unknown_features)] and related lint attributes to selectively abort
compilation. By default, this lint is set to deny.
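A sketch of opting out at the crate level, using the attribute named above; the crate-attribute spelling (this era's trailing semicolons) is an assumption and the feature name is made up:
```
// Downgrade the deny-by-default lint to a warning for this crate, so an
// unrecognized feature name warns instead of aborting compilation.
#[warn(unknown_features)];
#[feature(some_feature_this_compiler_does_not_know)];

fn main() {}
```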
### Rationale
There is no reason to support more than 2³² nodes or names at this moment, as compiling something that big (even without considering the quadratic space usage of some analysis passes) would take at least **64GB** of memory.
Meanwhile, some people can't (or can barely) compile rustc because it requires almost **1.5GB**.
### Potential problems
Can someone confirm this doesn't affect metadata (de)serialization? I can't tell myself; I know nothing about it.
### Results
Some structures have a size reduction of 25% to 50%: [before](https://gist.github.com/luqmana/3a82a51fa9c86d9191fa) - [after](https://gist.github.com/eddyb/5a75f8973d3d8018afd3).
Sadly, there isn't a massive change in the memory used for compiling stage2 librustc (it doesn't go over **1.4GB** as [before](http://huonw.github.io/isrustfastyet/mem/), but I can barely see the difference).
However, my own testcase (previously peaking at **1.6GB** in typeck) shows a reduction of **200**-**400MB**.
The majority of this change is modifying some of the `ast_visit` methods to return multiple values.
It's prohibitively expensive to allocate a `~[Foo]` every time a statement, declaration, item, etc. is visited, especially since the vast majority will have 0 or 1 elements. I've added a `SmallVector` class that avoids allocation in the 0 and 1 element cases to take care of that.
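A minimal sketch of the idea behind `SmallVector` (not the actual definition from this change): the 0- and 1-element cases carry no heap allocation.
```
// Present-day Rust for illustration; the Rust of this era would use ~[T]
// in place of Vec<T>.
enum SmallVector<T> {
    Zero,
    One(T),
    Many(Vec<T>),
}

impl<T> SmallVector<T> {
    fn push(self, item: T) -> SmallVector<T> {
        match self {
            SmallVector::Zero => SmallVector::One(item),
            SmallVector::One(first) => SmallVector::Many(vec![first, item]),
            SmallVector::Many(mut items) => {
                items.push(item);
                SmallVector::Many(items)
            }
        }
    }
}

fn main() {
    let v = SmallVector::Zero.push(1).push(2).push(3);
    match v {
        SmallVector::Many(items) => assert_eq!(items, [1, 2, 3]),
        _ => unreachable!(),
    }
}
```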
This patchset makes the compiler warn if a crate-level attribute is used in other places, an obsolete attribute is used, or an unknown attribute is used, since these usually come from mistakes.
Closes #3348
This is needed so that the FFI works as expected on platforms that don't
flatten aggregates the way the AMD64 ABI does, especially for `#[repr(C)]`.
This moves more of `type_of` into `trans::adt`, because the type might
or might not be an LLVM struct.
Closes #10308.
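A sketch of the kind of code this affects; the struct and foreign function are invented for illustration (and never called, so nothing has to link against them), the point being a small aggregate crossing the FFI boundary by value:
```
#[repr(C)]
struct Pair {
    a: i32,
    b: i32,
}

extern {
    // Hypothetical C function taking the struct by value; the ABI fix
    // described above is about how such aggregates get passed.
    fn pair_sum(p: Pair) -> i32;
}

fn main() {
    // Not called here, so no C definition is required to build this sketch.
    let _ = Pair { a: 1, b: 2 };
}
```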
The reasons for doing this are:
* The model on which linked failure is based is inherently complex
* The implementation is also very complex, and there are few remaining who
fully understand the implementation
* There are existing race conditions in the core context switching function of
the scheduler, and possibly others.
* It's unclear whether this model of linked failure maps well to a 1:1 threading
model
Linked failure is often a desired aspect of tasks, but we would like to take a
much more conservative approach in re-implementing linked failure if at all.
Closes #8674. Closes #8318. Closes #8863.
Issue #8763 is about improving a particular error message.
* added case & better error message for "impl trait for module"
* added compile-fail test trait-impl-for-module.rs
* updated copyright dates
* revised compile-fail test trait-or-new-type-instead
(the error message for the modified test is still unclear, but that's a different bug https://github.com/mozilla/rust/issues/8767)
* added case & better error message for "impl trait for module"
* used better way to print the module
* switched from //error-pattern to //~ ERROR
* added compile-fail test trait-impl-for-module.rs
* revised compile-fail test trait-or-new-type-instead
(the error message for the modified test is still unclear, but that's a different bug)
* added FIXME to trait-or-new-type-instead
With these changes I was able to cross compile for windows from a linux box. (Using the mingw-w64 package on Debian Testing).
Fixed a bug where the `target_family` cfg would be wrong when targeting something with a different value than the host (e.g. windows -> unix or unix -> windows).
Also, removed `LIBUV_FLAGS` in `mk/rt.mk` because of the redundancy between it and `CFG_GCCISH_CFLAGS_(target)`.
After this we can create a snapshot and migrate to mingw64 instead of mingw32.
I added a test case which does not compile today, and required changes on
privacy's side of things to get right. Additionally, this moves a good bit of
logic which did not belong in reachability into privacy.
All of reachability should solely be responsible for determining what the
reachable surface area of a crate is given the exported surface area (where the
exported surface area is that which is usable by external crates).
Privacy will now correctly figure out what's exported by deeply looking
through reexports. Previously if a module were reexported under another name,
nothing in the module would actually get exported in the executable. I also
consolidated the phases of privacy to be clearer about what's an input to what.
The privacy checking pass no longer uses the notion of an "all public" path, and
the embargo visitor is no longer an input to the checking pass.
Currently the embargo visitor is built as a saturating analysis because it's
unknown what portions of the AST are going to get re-exported.
This also cracks down on exported methods from impl blocks and trait blocks. If you implement a private trait, none of the symbols are exported, and if you have an impl for a private type none of the symbols are exported either. On the other hand, if you implement a public trait for a private type, the symbols are still exported. I'm unclear on whether this last part is correct, but librustc will fail to link unless it's in place.
This was needed to access UEFI boot services in my new Boot2Rust experiment.
I also realized that Rust functions declared as extern always use the C calling convention regardless of how they were declared, so this pull request fixes that as well.
This replaces `*` with `..` in enums, `_` with `..` in structs, and `.._` with `..` in vectors. It adds obsolete syntax warnings for the old forms but doesn't turn them on yet because we need a snapshot.
#5830
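A small sketch of the new forms in use (the types here are made up, written in the Rust of this era, so `uint` and `~str` appear):
```
struct Config {
    verbose: bool,
    jobs: uint,
    name: ~str,
}

enum Token {
    Ident(~str),
    Num(uint, uint),
}

fn demo(c: Config, t: Token) -> bool {
    // struct pattern: `..` ignores the remaining fields (previously `_`)
    let Config { verbose: v, .. } = c;
    // enum pattern: `..` ignores a variant's fields (previously `*`);
    // vector patterns similarly write `[first, ..]` instead of `[first, .._]`
    match t {
        Ident(..) => v,
        Num(..) => !v,
    }
}

fn main() {
    let c = Config { verbose: true, jobs: 4, name: ~"demo" };
    assert!(demo(c, Ident(~"x")));
}
```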
This PR improves the single-stepping experience for if-expression (no more jumping into the *else* branch before entering the *then* branch, no more jumping to the end of the *else* branch after finishing the *then* branch). Unfortunately I don't know of a straight-forward way of writing automated tests for this. Suggestions welcome!
If a function is marked as external, then it's likely desired for use with some
native library, so we're not really accomplishing a whole lot by internalizing
all of these symbols.
I've started working on this issue and pushed a small commit, which adds a range check for integer literals in `middle::const_eval` (no `uint` at the moment).
At the moment, this patch is just a proof of concept; I'm not sure if there is a better function for the checks in `middle::const_eval`. This patch does not check for overflows after constant folding, e.g.:
```
let x: i8 = 99 + 99;
```
Bare functions are another example of a scalar but non-numeric
type (like char) that should be handled separately in casts.
This disallows expressions like `0 as extern "Rust" fn() -> int;`.
It might be advantageous to allow casts between bare functions
and raw pointers in unsafe code in the future, to pass function
pointers between Rust and C.
Closes #8728
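The rejected form, for concreteness (the expression is the one quoted above; era Rust, so `int` is used):
```
fn main() {
    // This cast is now a compile-time error: a bare function type is scalar
    // but not numeric, so it may no longer be the target of a numeric cast.
    let _f = 0 as extern "Rust" fn() -> int;
}
```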
Now the privacy pass returns enough information that other passes do not need to duplicate the visibility rules, and the missing_doc implementation is more consistent with other lint checks.
Previously, the `exported_items` set created by the privacy pass was
incomplete. Specifically, it did not include items that had been defined
at a private path but then `pub use`d at a public path. This commit
finds all crate exports during the privacy pass. Consequently, some code
in the reachable pass and in rustdoc is no longer necessary. This commit
then removes the separate `MissingDocLintVisitor` lint pass, opting to
check the missing_doc lint in the same pass as the other lint checkers, using
the visibility result computed by the privacy pass.
Fixes #9777.
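A small sketch of the case now covered (the names are made up): an item defined at a private path but `pub use`d at a public one, which the privacy pass now records in `exported_items`.
```
// The public face of the crate is the re-export, not the private module.
pub use imp::Widget;

mod imp {
    // Defined at a private path...
    pub struct Widget;
}

fn main() {
    let _w = Widget;
}
```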
The UvProcess exit callback is called with a zero exit status and non-zero termination signal when a child is terminated by a signal.
If a parent checks only the exit status (for example, only checks the return value from `wait()`), it may believe the process completed successfully when it actually failed.
Helpers for common use-cases are in `std::rt::io::process`.
Should resolve https://github.com/mozilla/rust/issues/10062.
As we start to move runtime components into the crate map, it's becoming harder
and harder to start the runtime from a C function as rust is embedded in another
application. Right now if you compile a rust crate as a dynamic library which is
then linked to another application, when using std::rt::start there are no I/O
local services, even though rustuv was linked against and requested. The reason
for this is that there is no top level crate map available specifying where to
find libuv I/O.
This option is not meant to be used regularly, but rather whenever compiling a
final library crate and linking it into another application. This lifts the
requirement that to get a crate map you must have the final destination be an
executable.
Since the removal of privacy from resolve, this flag is no longer necessary to
get the test runner working. All of the privacy checks are bypassed by a special
item attribute in the privacy visitor.
Closes #4947
These two attributes are no longer useful now that Rust has decided to leave
segmented stacks behind. It is assumed that the rust task's stack is always
large enough to make an FFI call (due to the stack being very large).
There's always the case of stack overflow, however, to consider. This does not
change the behavior of stack overflow in Rust. This is still normally triggered
by the __morestack function and aborts the whole process.
C stack overflow will continue to corrupt the stack, however (as it did before
this commit as well). The future improvement of a guard page at the end of every
rust stack is still unimplemented and is intended to be the mechanism through
which we attempt to detect C stack overflow.
Closes #8822. Closes #10155.
This adds another ABI option which allows a custom selection based on the target
architecture and OS. The only current candidate for this change is that kernel32
on win32 uses stdcall, but on win64 it uses the cdecl calling convention.
Everywhere else this is defined as using the cdecl calling convention.
cc #10049. Closes #8774
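To make the kernel32 case concrete, a sketch that simply pins the convention by hand (written with today's cfg syntax; the per-target selection this change adds is not spelled out here, and the function is merely a familiar example):
```
// On 32-bit Windows, kernel32 entry points use stdcall; on Win64 the same
// declaration would need the C calling convention instead, which is what the
// new target-dependent ABI selection takes care of.
#[cfg(all(windows, target_arch = "x86"))]
#[link(name = "kernel32")]
extern "stdcall" {
    fn GetCurrentProcessId() -> u32;
}

#[cfg(all(windows, target_arch = "x86"))]
fn main() {
    println!("pid: {}", unsafe { GetCurrentProcessId() });
}

#[cfg(not(all(windows, target_arch = "x86")))]
fn main() {}
```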