* vec::raw::to_ptr is gone
* Pausible => Pausable
* Removing @
* Calling the main task "<main>"
* Removing unused imports
* Removing unused mut
* Bringing some libextra tests up to date
* Allowing compiletest to work at stage0
* Fixing the bootstrap-from-c rmake tests
* assert => rtassert in a few cases
* printing to stderr instead of stdout in fail!()
These functions are all unnecessary now, and they only have meaning in the M:N
context. Removing these functions uncovered a bug in the librustuv timer
bindings, but it was fairly easy to cover (and the test is already committed).
These cannot be completely removed just yet due to their usage in the WaitQueue
of extra::sync, and until the mutex in libextra is rewritten it will not be
possible to remove the deferred sends for channels.
The uv loop was being destroyed before the async handle was being destroyed, so
closing the async handle was causing a use-after-free in the uv loop. This was
fixed by moving destruction of the queue's async handle to an earlier location
and then actually freeing it once the loop has been dropped.
The queue has an async handle which must be destroyed before the loop is
destroyed, so use a little bit of an option dance to get around this
requirement.
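As a minimal sketch of that option dance (AsyncHandle and Queue below are
made-up stand-ins, not the librustuv types): keeping the handle in an Option
lets it be taken out and closed on its own, before the loop it belongs to is
dropped.

    // Illustrative types only; not the librustuv definitions.
    struct AsyncHandle;

    impl AsyncHandle {
        fn close(self) {
            // The real code would close the underlying uv handle here.
            println!("async handle closed");
        }
    }

    struct Queue {
        // Stored as an Option so it can be moved out and destroyed early.
        handle: Option<AsyncHandle>,
    }

    impl Drop for Queue {
        fn drop(&mut self) {
            // Take the handle out and close it before the loop goes away.
            if let Some(handle) = self.handle.take() {
                handle.close();
            }
        }
    }

    fn main() {
        let _queue = Queue { handle: Some(AsyncHandle) };
    }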
Note that this removes a number of run-pass tests which are exercising behavior
of the old runtime. This functionality no longer exists and is thoroughly tested
inside of libgreen and libnative. There isn't really the notion of "starting the
runtime" any more. The major notion now is "bootstrapping the initial task".
This allows creation of different sched pools with different io factories.
Namely, this will be used to test the basic I/O loop in the green crate. This
can also be used to override the global default.
Use the previous commit's new scheduler pool abstraction in libgreen to write
some homing tests which force an I/O handle to be homed from one event loop to
another.
All tests except for the homing tests are now working again with the
librustuv/libgreen refactoring. The homing-related tests are currently commented
out and have been moved into the rustuv::homing module.

I plan on refactoring scheduler pool spawning in order to enable more homing
tests in a future commit.
This reimplements librustuv without using the interfaces provided by the
scheduler in libstd. This solely uses the new Runtime trait in order to
interface with the local task and perform the necessary scheduling operations.
The largest snag in this refactoring is reimplementing homing. The new runtime
trait exposes no concept of "homing" a task or forcibly sending a task to a
remote scheduler (there is no concept of a scheduler). In order to reimplement
homing, the transference of tasks is now done at the librustuv level instead of
the scheduler level. This means that all I/O loops now have a concurrent queue
which receives homing messages and requests.
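As a rough sketch of that idea, using today's std threads and channels rather
than the librustuv types (IoLoop and HomingMessage below are purely
illustrative): each loop owns a concurrent queue, and a task that needs a
handle owned by another loop ships itself, represented here as a closure, over
that queue.

    use std::sync::mpsc::{channel, Receiver};
    use std::thread;

    // Illustrative stand-in; librustuv's real messages carry tasks, not closures.
    type HomingMessage = Box<dyn FnOnce() + Send>;

    struct IoLoop {
        incoming: Receiver<HomingMessage>,
    }

    impl IoLoop {
        fn run(self) {
            // The real loop interleaves this with uv I/O; here we just drain
            // homing requests until every sender is gone.
            for work in self.incoming {
                work();
            }
        }
    }

    fn main() {
        let (home, incoming) = channel::<HomingMessage>();
        let event_loop = thread::spawn(move || IoLoop { incoming }.run());

        // A task running elsewhere "homes" itself by sending work to the loop
        // that owns the I/O handle.
        home.send(Box::new(|| println!("running on the handle's home loop")))
            .unwrap();
        drop(home);

        event_loop.join().unwrap();
    }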
This allows the entire implementation of librustuv to be only dependent on the
runtime trait, severing all dependence of librustuv on the scheduler and related
green-thread functions.
This is all in preparation of the introduction of libgreen and libnative.
At the same time, I also took the liberty of removing all glob imports from
librustuv.
Understand 'pkgid' in stage0. As a bonus, the snapshot now contains no metadata
(now that those changes have landed), and the snapshot download is half as large
as it used to be!
This replaces the link meta attributes with a pkgid attribute and uses a hash
of this as the crate hash. This makes the crate hash computable by things
other than the Rust compiler. It also switches the hash function to SHA1 since
that is much more likely to be available in shell, Python, etc. than SipHash.
Fixes #10188, #8523.
This reverts commit c54427ddfb.
Leave in the #[ignore]s that were added to rustpkg tests.
Conflicts:
src/librustc/driver/driver.rs
src/librustc/metadata/creader.rs
This function had type &[u8] -> ~str, i.e. it allocates a string
internally, even though the non-allocating versions that take &[u8] ->
&str and ~[u8] -> ~str are all that are necessary in most circumstances.
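For reference, the same distinction in today's Rust (where ~str corresponds
roughly to String): borrowing the bytes as a &str does not allocate, while
producing a fresh String does, and converting an already-owned buffer reuses
its allocation.

    fn main() {
        let bytes: &[u8] = b"hello";

        // Non-allocating: borrow the bytes as a &str (the &[u8] -> &str case).
        let borrowed: &str = std::str::from_utf8(bytes).unwrap();

        // Allocating: copy the bytes into a fresh String (&[u8] -> ~str then).
        let allocated: String = String::from_utf8(bytes.to_vec()).unwrap();

        // Buffer-reusing: convert an owned Vec<u8> in place (~[u8] -> ~str then).
        let reused: String = String::from_utf8(b"hello".to_vec()).unwrap();

        assert_eq!(borrowed, allocated);
        assert_eq!(borrowed, reused);
    }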
This commit implements the support necessary for generating both intermediate
and result static rust libraries. This is an implementation of my thoughts in
https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html.
When compiling a library, we still retain the "lib" option, although now there
are "rlib", "staticlib", and "dylib" as options for crate_type (and these are
stackable). The idea of "lib" is to generate the "compiler default" instead of
having to choose (although all are interchangeable). For now I have left the
"compiler default" as a dynamic library for size reasons.
Of the rust libraries, lib{std,extra,rustuv} will bootstrap with an
rlib/dylib pair, but lib{rustc,syntax,rustdoc,rustpkg} will only be built as a
dynamic object. I chose this for size reasons, but also because you're probably
not going to be embedding the rustc compiler anywhere any time soon.
Other than the options outlined above, there are a few defaults/preferences that
are now opinionated in the compiler:
* If both a .dylib and .rlib are found for a rust library, the compiler will
prefer the .rlib variant. This is overridable via the -Z prefer-dynamic option.
* If generating a "lib", the compiler will generate a dynamic library. This is
overridable by explicitly saying what flavor you'd like (rlib, staticlib,
dylib).
* If no options are passed to the command line, and no crate_type is found in
the destination crate, then an executable is generated
With this change, you can successfully build a rust program with 0 dynamic
dependencies on rust libraries. There is still a dynamic dependency on
librustrt, but I plan on removing that in a subsequent commit.
This change includes no tests just yet. Our current testing
infrastructure/harnesses aren't very amenable to doing flavorful things with
linking, so I'm planning on adding a new mode of testing which I believe belongs
as a separate commit.
Closes #552
This adds an implementation of the Chase-Lev work-stealing deque to libstd
under std::rt::deque. I've been unable to break the implementation of the deque
itself, and it's not super highly optimized just yet (everything uses a SeqCst
memory ordering).
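The std::rt::deque interface itself isn't shown in this message, but as a
rough sketch of the worker/stealer split a Chase-Lev deque gives you, here is
the same shape using the crossbeam-deque crate as a stand-in (an assumption on
my part, not the original API):

    use crossbeam_deque::{Steal, Worker};
    use std::thread;

    fn main() {
        // The owning end pushes and pops locally; stealers take from the
        // opposite end without blocking the owner.
        let worker = Worker::new_lifo();
        let stealer = worker.stealer();

        for task in 0..4 {
            worker.push(task);
        }

        let thief = thread::spawn(move || {
            // Steal until the deque looks empty (ignoring retries for brevity).
            while let Steal::Success(task) = stealer.steal() {
                println!("stole {}", task);
            }
        });

        // The owner drains whatever the thief didn't get to.
        while let Some(task) = worker.pop() {
            println!("popped {}", task);
        }

        thief.join().unwrap();
    }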
The major snag in implementing the chase-lev deque is that the buffers used to
store data internally cannot get deallocated back to the OS. In the meantime, a
shared buffer pool (synchronized by a normal mutex) is used to
deallocate/allocate buffers from. This is done in hope of not overcommitting too
much memory. It is in theory possible to eventually free the buffers, but one
must be very careful in doing so.
I was unable to get some good numbers from src/test/bench tests (I don't think
many of them are slamming the work queue that much), but I was able to get some
good numbers from one of my own tests. In a recent rewrite of select::select(),
I found that my implementation was incredibly slow due to contention on the
shared work queue. Upon switching to the parallel deque, I saw the contention
drop to 0 and the runtime go from 1.6s to 0.9s, with most of the time spent in
libuv awakening the schedulers (plus allocations).
Closes #4877