The libuv fs wrappers are very thin wrappers around the syscalls they correspond
to, and a notably worrisome case is the write syscall. This syscall is not
guaranteed to write the entire buffer provided, so we may have to continue
calling uv_fs_write if a short write occurs.
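For illustration, the fix amounts to a retry loop of roughly this shape (a sketch with hypothetical helper names, not the actual librustuv code):

    fn write_all(fd: c_int, buf: &[u8]) -> Result<(), IoError> {
        // a single write (uv_fs_write / write(2)) may accept fewer bytes than
        // requested, so keep writing the remaining tail of the buffer
        let mut offset = 0;
        while offset < buf.len() {
            match raw_write(fd, buf.slice_from(offset)) { // raw_write is hypothetical
                Ok(n) => offset += n,    // short write: advance and retry
                Err(e) => return Err(e),
            }
        }
        Ok(())
    }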
Closes #13130
This is the final nail in the coffin for the crate map. The `start` function for
libgreen now has a new parameter, the event loop factory, instead
of inferring it from the crate map. The two current valid values for this
parameter are `green::basic::event_loop` and `rustuv::event_loop`.
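A program that still wants the green runtime now opts in explicitly, along these lines (a sketch; the exact `green::start` signature of the time is assumed):

    extern crate green;
    extern crate rustuv;

    #[start]
    fn start(argc: int, argv: **u8) -> int {
        // hand the event loop factory to libgreen directly instead of having it
        // discovered through the crate map; swap in green::basic::event_loop for
        // an event loop that performs no real I/O
        green::start(argc, argv, rustuv::event_loop, main)
    }

    fn main() {
        // runs inside a green task driven by the rustuv event loop
    }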
The compiler will no longer inject libgreen as the default runtime for Rust
programs; this commit switches the default over to libnative. Now that
libnative has baked for some time, it is ready enough to start getting more
serious usage as the default runtime for rustc generated binaries.
We've found that there isn't really a correct decision in choosing a 1:1 or M:N
runtime as a default for all applications, but it seems that a larger number of
programs today would work more reasonably with a native default rather than a
green default.
With this commit come a number of bugfixes:
* The main native task is now named "<main>"
* The main native task has the stack bounds set up properly
* #[no_uv] was renamed to #[no_start]
* The core-run-destroy test was rewritten for both libnative and libgreen and
one of the tests was modified to be more robust.
* The process-detach test was locked to libgreen because it uses signal handling
lint: add lint for use of a `~[T]`.
This is useless at the moment (since pretty much every crate uses
`~[]`), but should help avoid regressions once completely removed from a
crate.
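For example (a sketch of the kind of code the lint is meant to flag, assuming `Vec<T>` as the eventual replacement):

    fn consume(_v: ~[int]) { }       // `~[int]` in a signature: flagged
    let xs: ~[int] = ~[1, 2, 3];     // `~[T]` in a type and expression: flagged
    // once `~[T]` is gone, the library vector type (e.g. Vec<int>) takes its place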
This is something that is plausibly useful, and is provided by libuv. This is
not currently surfaced as part of the `TcpStream` type, but it may possibly
appear in the future. For now only the raw functionality is provided through the
Rtio objects.
If a TTY fails to get initialized, it still needs to have uv_close invoked on
it. This fixes the problem by constructing the TtyWatcher struct before the call
to uv_tty_init. The struct has a destructor on it which will close the handle
properly.
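The shape of the fix is the usual RAII pattern: create the value whose destructor closes the handle before making the call that can fail. Roughly (types and helpers simplified, not the actual librustuv code):

    struct TtyWatcher {
        handle: *uv_tty_t,   // raw libuv handle
    }

    impl Drop for TtyWatcher {
        fn drop(&mut self) {
            // uv_close must run whether or not uv_tty_init succeeded, so the
            // destructor is the one place that closes the handle
            unsafe { close_handle(self.handle) }   // hypothetical wrapper over uv_close
        }
    }

    // construct the watcher *first*, then attempt initialization; if init fails
    // the watcher is dropped and the handle still gets closed
    let watcher = TtyWatcher { handle: handle };
    let r = unsafe { uv_tty_init(loop_, watcher.handle, fd, readable as c_int) };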
Closes #12666
There's a lot of these types in the compiler libraries, and a few of the
older or private stdlib ones. Some types are obviously meant to be
public, others not so much.
Commits for details. Highlights:
- `flate` returns `CVec<u8>` to save reallocating a whole new `&[u8]`
- a lot of `transmute`s removed outright or replaced with `as` (etc.)
This commit removes deriving(ToStr) in favor of deriving(Show), migrating all impls of ToStr to fmt::Show.
Most of the details can be found in the first commit message.
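For user code the migration is roughly mechanical:

    // before: deriving the old trait
    #[deriving(ToStr)]
    struct Point { x: int, y: int }

    // after: derive Show instead; to_str() and format!("{}", p) both keep working
    #[deriving(Show)]
    struct Point { x: int, y: int }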
Closes #12477
The std::run module is a relic from a standard library long since past, and
there's not much use to having two modules to execute processes with where one
is slightly more convenient. This commit merges the two modules, moving lots of
functionality from std::run into std::io::process and then deleting
std::run.
New things you can find in std::io::process are:
* Process::new() now only takes prog/args
* Process::configure() takes a ProcessConfig
* Process::status() is the same as run::process_status
* Process::output() is the same as run::process_output
* I/O for spawned tasks now defaults to being captured in pipes instead of ignored
* Process::kill() was added (plus an associated green/native implementation)
* Process::wait_with_output() is the same as the old finish_with_output()
* destroy() is now signal_exit()
* force_destroy() is now signal_kill()
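A rough sketch of the merged API in use (exact argument and configuration types are assumed):

    use std::io::process::Process;

    // spawning takes just a program and its arguments; stdout/stderr now
    // default to being captured in pipes rather than ignored
    let mut child = Process::new("ls", [~"-l"]).unwrap();
    let result = child.wait_with_output();     // the old finish_with_output()

    // one-shot helpers standing in for run::process_status / run::process_output
    let status = Process::status("ls", [~"-l"]);
    let output = Process::output("ls", [~"-l"]);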
Closes #2625
Closes #10016
This commit changes the ToStr trait to:
    impl<T: fmt::Show> ToStr for T {
        fn to_str(&self) -> ~str { format!("{}", *self) }
    }
The ToStr trait has been on the chopping block for quite a while now, and this is
the final nail in its coffin. The trait and the corresponding method are not
being removed as part of this commit, but rather any implementations of the
`ToStr` trait are being forbidden because of the generic impl. The new way to
get the `to_str()` method to work is to implement `fmt::Show`.
Formatting into a `&mut Writer` (as `format!` does) is much more efficient than
`ToStr` when building up large strings. The `ToStr` trait forces many
intermediate allocations to be made while the `fmt::Show` trait allows
incremental buildup in the same heap allocated buffer. Additionally, the
`fmt::Show` trait is much more extensible in terms of interoperation with other
`Writer` instances and in more situations. By design the `ToStr` trait requires
at least one allocation whereas the `fmt::Show` trait does not require any
allocations.
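In other words, `to_str` becomes a thin convenience over the formatting machinery:

    // anything implementing fmt::Show gets to_str() for free through the
    // blanket impl; both lines below run through the same format! machinery
    let n = 42;
    assert_eq!(n.to_str(), ~"42");
    assert_eq!(format!("{}", n), ~"42");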
Closes #8242
Closes #9806
The details can be found in the comments I added to the test, but the gist of it
is that capturing output introduces a green-task reschedule on failure, which
wasn't desired for the test in question.
cc #12340
- adds a `LockGuard` type returned by `.lock` and `.trylock` that unlocks the mutex in the destructor
- renames `mutex::Mutex` to `StaticNativeMutex`
- adds a `NativeMutex` type with a destructor
- removes `LittleLock`
- adds `#[must_use]` to `sync::mutex::Guard` to remind people to use it
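A rough sketch of the guard-based usage this enables (constructor details and any `unsafe` requirements elided):

    let mut lock = NativeMutex::new();
    {
        let _guard = lock.lock();   // acquires the mutex
        // ... critical section ...
    }                               // guard dropped here: mutex unlocked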
Any single-threaded task benchmark will spend a good chunk of time in `kqueue()` on osx and `epoll()` on linux, and the reason for this is that each time a task is terminated it will hit the syscall. When a task terminates, it context switches back to the scheduler thread, and the scheduler thread falls out of `run_sched_once` whenever it figures out that it did some work.
If we know that `epoll()` will return nothing, then we can continue to do work locally (only while there's work to be done). We must fall back to `epoll()` whenever there's active I/O in order to check whether it's ready or not, but without that (which is largely the case in benchmarks), we can prevent the costly syscall and can get a nice speedup.
I've separated the commits into preparation for this change and then the change itself, the last commit message has more details.
The green scheduler can optimize its runtime based on this by deciding to not go
to sleep in epoll() if there is no active I/O and there is a task to be stolen.
This is implemented for librustuv by keeping a count of the number of tasks
which are currently homed. If a task is homed, and then performs a blocking I/O
operation, the count will be nonzero while the task is blocked. The homing count
is intentionally 0 when there are I/O handles but none of them are currently blocked.
The reason for this is that epoll() would only be used to wake up the scheduler
anyway.
The crux of this change was to have a `HomingMissile` contain a mutable borrowed
reference back to the `HomeHandle`. The rest of the change was just dealing with
this fallout. This reference is used to decrement the homed handle count in a
HomingMissile's destructor.
Also note that the count maintained is not atomic because all of its
increments/decrements/reads are all on the same I/O thread.
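Schematically, the bookkeeping looks like this (field and type names simplified, not the actual librustuv code):

    struct HomeHandle {
        blocked_homed_tasks: uint,   // plain uint: only touched on the I/O thread
    }

    struct HomingMissile<'a> {
        home: &'a mut HomeHandle,    // the mutable borrow described above
    }

    impl<'a> Drop for HomingMissile<'a> {
        fn drop(&mut self) {
            // firing the missile bumped the count; returning home decrements it
            self.home.blocked_homed_tasks -= 1;
        }
    }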
This, the Nth rewrite of channels, is not a rewrite of the core logic behind
channels, but rather their API usage. In the past, we had the distinction
between oneshot, stream, and shared channels, but the most recent rewrite
dropped oneshots in favor of streams and shared channels.
This distinction of stream vs shared has shown that it's not quite what we'd
like either, and this moves the `std::comm` module in the direction of "one
channel to rule them all". There now remains only one Chan and one Port.
This new channel is actually a hybrid oneshot/stream/shared channel under the
hood in order to optimize for the use cases in question. Additionally, this
reduces the cognitive burden of having to choose between a Chan or a SharedChan
in an API.
My simple benchmarks show no reduction in efficiency over the existing channels
today, and a 3x improvement in the oneshot case. I sadly don't have a
pre-last-rewrite compiler to test out the old old oneshots, but I would imagine
that the performance is comparable, but slightly slower (due to atomic reference
counting).
This commit also brings a bonus bugfix to channels: the pending queue of
messages is now dropped when a Port disappears rather than when both the Port
and the Chan disappear.
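A sketch of the resulting API: one constructor, with cloning the Chan as the way to get the shared flavor under the hood.

    // the single Chan/Port pair covers the oneshot, stream, and shared cases
    let (port, chan) = Chan::new();
    chan.send(1);

    // cloning transparently switches the channel to its shared implementation
    let chan2 = chan.clone();
    spawn(proc() {
        chan2.send(2);
    });

    let (a, b) = (port.recv(), port.recv());
    println!("got {} and {}", a, b);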
Beforehand, using a concurrent queue always mandated that the "shared state" be
stored internally to the queues in order to provide a safe interface. This isn't
quite as flexible as one would want in some circumstances, so instead this
commit moves the queues to not containing the shared state.
The queues no longer have a "default useful safe" interface, but rather a
"default safe" interface (minus the useful part). The queues have to be shared
manually through an Arc or some other means. This allows them to be a little
more flexible at the cost of a usability hindrance.
I plan on using this new flexibility to upgrade a channel to a shared channel
seamlessly.
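Concretely, sharing now happens outside the queue; a hedged sketch (the queue and handle names here are placeholders):

    // the queue no longer carries its own shared state, so the caller wraps it
    // in an Arc (or similar) and hands a clone to each side
    let shared = Arc::new(Queue::new());    // Queue is a stand-in name

    let producer = shared.clone();          // handle for the pushing side
    let consumer = shared.clone();          // handle for the popping side
    // both handles point at the same underlying queue, and the safety of
    // concurrent access is now managed by the wrapper, not the queue itself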
This is part of the overall strategy I would like to take when approaching
issue #11165. The only two I/O objects that reasonably want to be "split" are
the network stream objects. Everything else can be "split" by just creating
another version.
The initial idea I had was to literally split the object into a reader and a
writer half, but that would just introduce lots of clutter with extra interfaces
that were a little unnecessary, or it would return a ~Reader and a ~Writer, which
means you couldn't access things like the remote peer name or local socket name.
The solution I found to be nicer was to just clone the stream itself. The clone
is just a clone of the handle, nothing fancy going on at the kernel level.
Conceptually I found this very easy to wrap my head around (everything else
supports clone()), and it solved the "split" problem at the same time.
The cloning support is pretty specific per platform/lib combination:
* native/win32 - uses some specific WSA apis to clone the SOCKET handle
* native/unix - uses dup() to get another file descriptor
* green/all - This is where things get interesting. When we support full clones
of a handle, this implies that we're allowing simultaneous writes
and reads to happen. It turns out that libuv doesn't support two
simultaneous reads or writes of the same object. It does support
*one* read and *one* write at the same time, however. Some extra
infrastructure was added to just block concurrent writers/readers
until the previous read/write operation was completed.
I've added tests to the tcp/unix modules to make sure that this functionality is
supported everywhere.
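At the API level, splitting a connection is now just a clone (sketch; `addr` is a placeholder and error handling is omitted):

    let stream = TcpStream::connect(addr).unwrap();
    let reader = stream.clone();            // same underlying handle

    spawn(proc() {
        // the reading half lives in its own task
        let mut reader = reader;
        let mut buf = [0u8, ..128];
        let _ = reader.read(buf);
    });

    // while the original handle keeps writing from this task
    let mut writer = stream;
    let _ = writer.write(bytes!("ping"));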