This reverts commit c54427ddfb.
Leave in the #[ignore] attributes that were added to the rustpkg tests.
Conflicts:
src/librustc/driver/driver.rs
src/librustc/metadata/creader.rs
This function had type &[u8] -> ~str, i.e. it allocates a string
internally, even though the non-allocating versions, &[u8] -> &str and
~[u8] -> ~str, are all that is necessary in most circumstances.
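In today's terms (where ~str is roughly String), a minimal sketch of the
distinction:

    fn main() {
        let bytes: &[u8] = b"hello";

        // Non-allocating: validate the UTF-8 and borrow the bytes as &str.
        let borrowed: &str = std::str::from_utf8(bytes).unwrap();

        // Allocating: copy into an owned String (the old &[u8] -> ~str shape).
        let owned: String = String::from_utf8(bytes.to_vec()).unwrap();

        println!("{} {}", borrowed, owned);
    }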
This commit implements the support necessary for generating both intermediate
and result static rust libraries. This is an implementation of my thoughts in
https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html.
When compiling a library, we still retain the "lib" option, although now there
are "rlib", "staticlib", and "dylib" as options for crate_type (and these are
stackable). The idea of "lib" is to generate the "compiler default" instead of
having to choose (although all are interchangeable). For now I have left the
"compiler default" as a dynamic library, for size reasons.
Of the rust libraries, lib{std,extra,rustuv} will bootstrap with an
rlib/dylib pair, but lib{rustc,syntax,rustdoc,rustpkg} will only be built as a
dynamic object. I chose this for size reasons, but also because you're probably
not going to be embedding the rustc compiler anywhere any time soon.
Other than the options outlined above, there are a few defaults/preferences that
are now opinionated in the compiler:
* If both a .dylib and .rlib are found for a rust library, the compiler will
prefer the .rlib variant. This is overridable via the -Z prefer-dynamic option.
* If generating a "lib", the compiler will generate a dynamic library. This is
overridable by explicitly saying what flavor you'd like (rlib, staticlib,
dylib).
* If no options are passed on the command line, and no crate_type attribute is
found in the crate being compiled, then an executable is generated.
With this change, you can successfully build a rust program with 0 dynamic
dependencies on rust libraries. There is still a dynamic dependency on
librustrt, but I plan on removing that in a subsequent commit.
This change includes no tests just yet. Our current testing
infrastructure/harnesses aren't very amenable to doing flavorful things with
linking, so I'm planning on adding a new mode of testing, which I believe
belongs in a separate commit.
Closes #552
This adds an implementation of the Chase-Lev work-stealing deque to libstd
under std::rt::deque. I've been unable to break the implementation of the deque
itself, and it's not super highly optimized just yet (everything uses a SeqCst
memory ordering).
The major snag in implementing the Chase-Lev deque is that the buffers used to
store data internally cannot safely be deallocated back to the OS. In the
meantime, a shared buffer pool (synchronized by a normal mutex) is used to
allocate and deallocate buffers, in the hope of not overcommitting too much
memory. It is in theory possible to eventually free the buffers, but one must
be very careful in doing so.
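The same algorithm survives outside this runtime; for a feel of the
owner-push/thief-steal contract described above, here is a minimal sketch
using the present-day crossbeam-deque crate (an external crate, not part of
this commit):

    use crossbeam_deque::{Steal, Worker};

    fn main() {
        // The owning task pushes and pops at the bottom of the deque...
        let worker: Worker<i32> = Worker::new_lifo();
        let stealer = worker.stealer();
        worker.push(1);
        worker.push(2);

        // ...while any other thread steals from the top.
        std::thread::spawn(move || {
            match stealer.steal() {
                Steal::Success(v) => println!("stole {}", v),
                Steal::Empty | Steal::Retry => {}
            }
        })
        .join()
        .unwrap();

        assert_eq!(worker.pop(), Some(2)); // the owner side is LIFO
    }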
I was unable to get good numbers from the src/test/bench tests (I don't think
many of them slam the work queue that hard), but I was able to get some good
numbers from one of my own tests. In a recent rewrite of select::select(),
I found that my implementation was incredibly slow due to contention on the
shared work queue. Upon switching to the parallel deque, I saw the contention
drop to 0 and the runtime go from 1.6s to 0.9s, with most of the time spent
in libuv waking the schedulers (plus allocations).
Closes #4877
It turns out that libuv was returning ENOSPC to us in our usage of the
uv_ipX_name functions. It also turns out that there may be an off-by-one in
libuv. For now just add one to the buffer size and handle the return value
correctly.
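A rough sketch of the workaround in Rust FFI terms (uv_ip4_name is libuv's
real signature; the struct layout and exact buffer size here are illustrative
only, and the snippet assumes libuv is available to link against):

    use std::ffi::CStr;
    use std::os::raw::{c_char, c_int};

    #[repr(C)]
    pub struct sockaddr_in {
        _storage: [u8; 16], // opaque for this sketch; real layout is platform-defined
    }

    #[link(name = "uv")]
    extern "C" {
        fn uv_ip4_name(src: *const sockaddr_in, dst: *mut c_char, size: usize) -> c_int;
    }

    pub unsafe fn ip4_name(addr: *const sockaddr_in) -> Result<String, c_int> {
        // "255.255.255.255\0" needs 16 bytes; add one more to sidestep the
        // suspected off-by-one in libuv's bounds check.
        let mut buf = [0 as c_char; 16 + 1];
        match uv_ip4_name(addr, buf.as_mut_ptr(), buf.len()) {
            0 => Ok(CStr::from_ptr(buf.as_ptr()).to_string_lossy().into_owned()),
            err => Err(err), // e.g. UV_ENOSPC when the buffer is too small
        }
    }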
Closes #10663
This is a behavioral difference in libuv between different platforms in
different situations. It turns out that libuv on Windows will immediately
allocate a buffer instead of waiting for data to be ready, which implies
that we must have our custom data set on the handle before we call
uv_read_start.
I wish I knew of a way to test this, but it relies on being on the Windows
platform *and* reading from a true TTY handle, which only happens when the
process is actually attached to a terminal. I have manually verified that
this works.
Closes #10645
The reasons for doing this are:
* The model on which linked failure is based is inherently complex
* The implementation is also very complex, and few people remain who fully
understand it
* There are existing race conditions in the core context switching function of
the scheduler, and possibly others.
* It's unclear whether this model of linked failure maps well to a 1:1 threading
model
Linked failure is often a desired aspect of tasks, but we would like to take a
much more conservative approach in re-implementing linked failure if at all.
Closes #8674. Closes #8318. Closes #8863.
There are issues with reading stdin when it is actually attached to a pipe, but
I have run into no problems in writing to stdout/stderr when they are attached
to pipes.
This commit re-organizes the io::native module slightly in order to have a
working implementation of rtio::IoFactory which uses native implementations. The
goal is to seamlessly multiplex among libuv/native implementations wherever
necessary.
Right now most of the native I/O is unimplemented, but we have existing bindings
for file descriptors and processes which have been hooked up. What this means is
that you can now invoke println!() from libstd with no local task, no local
scheduler, and even without libuv.
There's still plenty of work to do on the native I/O factory, but these are
the first steps toward making it an official portion of the standard library.
I don't expect anyone to reach into io::native directly, but rather only std::io
primitives will be used. Each std::io interface seamlessly falls back onto the
native I/O implementation if the local scheduler doesn't have a libuv one
(hurray trait objects!).
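A rough shape of that fallback (all names here are hypothetical stand-ins,
not the actual rtio types):

    use std::io::{self, Write};

    // One trait, many backends: callers only ever see the trait object.
    trait IoFactory {
        fn write_stdout(&mut self, data: &[u8]) -> io::Result<()>;
    }

    // Plain blocking syscalls: works with no local task, scheduler, or libuv.
    struct NativeIoFactory;

    impl IoFactory for NativeIoFactory {
        fn write_stdout(&mut self, data: &[u8]) -> io::Result<()> {
            io::stdout().write_all(data)
        }
    }

    // Hand back the scheduler's libuv factory when present, else fall back.
    fn local_io(scheduler_io: Option<Box<dyn IoFactory>>) -> Box<dyn IoFactory> {
        scheduler_io.unwrap_or_else(|| Box::new(NativeIoFactory))
    }

    fn main() -> io::Result<()> {
        let mut factory = local_io(None); // no libuv event loop around
        factory.write_stdout(b"println! without a scheduler\n")
    }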
These two attributes are no longer useful now that Rust has decided to leave
segmented stacks behind. It is assumed that the rust task's stack is always
large enough to make an FFI call (due to the stack being very large).
There is still the case of stack overflow to consider, however. This does not
change the behavior of stack overflow in Rust. This is still normally triggered
by the __morestack function and aborts the whole process.
C stack overflow will continue to corrupt the stack, however (as it did before
this commit as well). The future improvement of a guard page at the end of every
rust stack is still unimplemented and is intended to be the mechanism through
which we attempt to detect C stack overflow.
Closes #8822. Closes #10155.
When a channel is destroyed, it may attempt scheduler operations which could
move a task off of its I/O scheduler. This is obviously a bad interaction, and
some finesse is required to make it work (making destructors run at the right
time).
Closes #10375
It appears that uv's support for interacting with a stdio stream as a tty when
it's actually a pipe is pretty problematic. To get around this, promote a check
to see if the stream is a tty to the top of the tty constructor, and bail out
quickly if it's not identified as a tty.
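A minimal sketch of that guard in today's Rust (the constructor and error
names are hypothetical; std::io::IsTerminal performs the isatty-style check):

    use std::io::{self, IsTerminal};

    struct Tty; // stand-in for the real uv TTY wrapper

    #[derive(Debug)]
    enum TtyError {
        NotATty,
    }

    // Check up front and bail out before libuv ever sees a stream it
    // would mishandle.
    fn tty_from_stdin() -> Result<Tty, TtyError> {
        if !io::stdin().is_terminal() {
            return Err(TtyError::NotATty); // e.g. stdin is actually a pipe
        }
        Ok(Tty)
    }

    fn main() {
        match tty_from_stdin() {
            Ok(_) => println!("stdin is a real terminal"),
            Err(e) => println!("refusing TTY mode: {:?}", e),
        }
    }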
Closes #10237
In the ideal world, uv I/O could be canceled safely at any time. In reality,
however, we are unable to do this. Right now linked failure is fairly flaky as
implemented in the runtime, making it very difficult to test whether the linked
failure mechanisms inside of the uv bindings are ready for this kind of
interaction.
Right now, all constructors will execute in a task::unkillable block, and all
homing I/O operations will prevent linked failure in the duration of the homing
operation. What this means is that tasks which perform I/O are still susceptible
to linked failure, but the I/O operations themselves will never get interrupted.
Instead, the linked failure will be received at the edge of the I/O operation.
It turns out that the uv implementation would cause use-after-free if the idle
callback was used after the call to `close`, and additionally nothing would ever
really work that well if `start()` were called twice. To change this, the
`start` and `close` methods were removed in favor of specifying the callback at
creation, and allowing destruction to take care of closing the watcher.
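A sketch of the resulting shape (types hypothetical): the callback is fixed at
construction and teardown happens only in Drop, so a closed watcher can never
fire and there is no second start() to misuse.

    struct IdleWatcher {
        callback: Box<dyn FnMut()>,
    }

    impl IdleWatcher {
        // The callback is supplied exactly once, at creation; real code
        // would also register the uv handle with the event loop here.
        fn new(callback: Box<dyn FnMut()>) -> IdleWatcher {
            IdleWatcher { callback }
        }

        fn fire(&mut self) {
            (self.callback)();
        }
    }

    impl Drop for IdleWatcher {
        fn drop(&mut self) {
            // Real code would uv_close() the handle here; destruction is
            // the only teardown path, so use-after-close cannot happen.
        }
    }

    fn main() {
        let mut idle = IdleWatcher::new(Box::new(|| println!("idle callback")));
        idle.fire();
    } // the watcher closes itself when dropped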