See the commits for details. Highlights:
- `flate` returns `CVec<u8>` to save reallocating a whole new `&[u8]`
- a lot of `transmute`s removed outright or replaced with `as` (etc.)
This commit removes deriving(ToStr) in favor of deriving(Show), migrating all impls of ToStr to fmt::Show.
Most of the details can be found in the first commit message.
Closes #12477
The std::run module is a relic from a standard library long since past, and
there's not much use in having two modules for executing processes when one is
only slightly more convenient than the other. This commit merges the two
modules, moving lots of
functionality from std::run into std::io::process and then deleting
std::run.
New things you can find in std::io::process are listed below (a rough usage
sketch follows the list):
* Process::new() now only takes prog/args
* Process::configure() takes a ProcessConfig
* Process::status() is the same as run::process_status
* Process::output() is the same as run::process_output
* I/O for spawned processes is now captured in pipes by default instead of being ignored
* Process::kill() was added (plus an associated green/native implementation)
* Process::wait_with_output() is the same as the old finish_with_output()
* destroy() is now signal_exit()
* force_destroy() is now signal_kill()
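As a rough sketch of how the consolidated API looks in use (names are taken
from the list above; exact argument and return types may differ, and error
handling is elided):

```
use std::io::process::Process;

fn main() {
    // Spawn with just a program and its arguments; the child's I/O is
    // captured in pipes by default rather than ignored.
    let mut child = Process::new("echo", [~"hello"]).unwrap();

    // Roughly the old run::process_output()/finish_with_output(): wait for
    // the child to exit and collect everything it wrote.
    let output = child.wait_with_output();
    println!("exit status: {}", output.status);
}
```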
Closes #2625, closes #10016
This commit changes the ToStr trait to:
```
impl<T: fmt::Show> ToStr for T {
    fn to_str(&self) -> ~str { format!("{}", *self) }
}
```
The ToStr trait has been on the chopping block for quite a while now, and this is
the final nail in its coffin. The trait and the corresponding method are not
being removed as part of this commit, but rather any implementations of the
`ToStr` trait are being forbidden because of the generic impl. The new way to
get the `to_str()` method to work is to implement `fmt::Show`.
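For example (a minimal sketch; `#[deriving(Show)]` is the path of least
resistance, and a manual `impl fmt::Show` works the same way):

```
#[deriving(Show)]
struct Point { x: int, y: int }

fn main() {
    let p = Point { x: 1, y: 2 };
    // No ToStr impl is written for Point; to_str() comes from the generic
    // impl over fmt::Show shown above.
    let s: ~str = p.to_str();
    println!("{}", s);
}
```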
Formatting into a `&mut Writer` (as `format!` does) is much more efficient than
`ToStr` when building up large strings. The `ToStr` trait forces many
intermediate allocations to be made while the `fmt::Show` trait allows
incremental buildup in the same heap allocated buffer. Additionally, the
`fmt::Show` trait is much more extensible in terms of interoperation with other
`Writer` instances and in more situations. By design the `ToStr` trait requires
at least one allocation whereas the `fmt::Show` trait does not require any
allocations.
Closes #8242, closes #9806
The details can be found in the comments I added to the test, but the gist of it
is that capturing output injects a reschedule of a green task on failure, which
wasn't desired for the test in question.
cc #12340
- adds a `LockGuard` type returned by `.lock` and `.trylock` that unlocks the mutex in its destructor (see the sketch after this list)
- renames `mutex::Mutex` to `StaticNativeMutex`
- adds a `NativeMutex` type with a destructor
- removes `LittleLock`
- adds `#[must_use]` to `sync::mutex::Guard` to remind people to use it
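A minimal sketch of the guard pattern this enables (the module path,
constructor, and exact unsafety contract here are assumptions, not the
authoritative API):

```
use std::unstable::mutex::NativeMutex;

fn main() {
    let mut lock = unsafe { NativeMutex::new() };
    {
        // .lock() hands back a guard; the mutex stays held for the guard's
        // lifetime and is released in the guard's destructor.
        let _guard = unsafe { lock.lock() };
        // ... critical section ...
    } // _guard dropped here, mutex unlocked
}
```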
Any single-threaded task benchmark will spend a good chunk of time in `kqueue()` on OS X and `epoll()` on Linux, and the reason for this is that each time a task is terminated it will hit the syscall. When a task terminates, it context switches back to the scheduler thread, and the scheduler thread falls out of `run_sched_once` whenever it figures out that it did some work.
If we know that `epoll()` will return nothing, then we can continue to do work locally (only while there's work to be done). We must fall back to `epoll()` whenever there's active I/O in order to check whether it's ready or not, but without that (which is largely the case in benchmarks), we can prevent the costly syscall and can get a nice speedup.
I've separated the commits into preparation for this change and then the change itself; the last commit message has more details.
The green scheduler can optimize its runtime based on this by deciding to not go
to sleep in epoll() if there is no active I/O and there is a task to be stolen.
This is implemented for librustuv by keeping a count of the number of tasks
which are currently homed. If a task is homed, and then performs a blocking I/O
operation, the count will be nonzero while the task is blocked. The homing count
is intentionally 0 when there are I/O handles but none of them are currently blocked.
The reason for this is that epoll() would only be used to wake up the scheduler
anyway.
The crux of this change was to have a `HomingMissile` contain a mutable borrowed
reference back to the `HomeHandle`. The rest of the change was just dealing with
this fallout. This reference is used to decrement the homed handle count in a
HomingMissile's destructor.
Also note that the count maintained is not atomic, because all of its
increments/decrements/reads happen on the same I/O thread.
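In simplified form, the pattern looks something like this (the types and field
names here are stand-ins, not the real librustuv definitions, and era-specific
destructor attributes are omitted):

```
struct HomeHandle {
    // Not atomic: every increment/decrement/read happens on the one I/O thread.
    blocked_homed_tasks: uint,
}

struct HomingMissile<'a> {
    home: &'a mut HomeHandle,
}

impl<'a> Drop for HomingMissile<'a> {
    fn drop(&mut self) {
        // The homed task is done blocking; decrement the count so the
        // scheduler knows it can skip epoll() when nothing is outstanding.
        self.home.blocked_homed_tasks -= 1;
    }
}
```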
This, the Nth rewrite of channels, is not a rewrite of the core logic behind
channels, but rather their API usage. In the past, we had the distinction
between oneshot, stream, and shared channels, but the most recent rewrite
dropped oneshots in favor of streams and shared channels.
This distinction of stream vs shared has shown that it's not quite what we'd
like either, and this moves the `std::comm` module in the direction of "one
channel to rule them all". There now remains only one Chan and one Port.
This new channel is actually a hybrid oneshot/stream/shared channel under the
hood in order to optimize for the use cases in question. Additionally, this also
reduces the cognitive burden of having to choose between a Chan and a SharedChan
in an API.
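In practice every use now looks the same, whether it ends up as a oneshot,
stream, or shared channel under the hood. Roughly (a sketch; the constructor
name and details may differ):

```
fn main() {
    // One constructor, one Chan, one Port, regardless of how the channel
    // ends up being used.
    let (port, chan) = Chan::new();

    spawn(proc() {
        chan.send(42);
    });

    println!("got {}", port.recv());
}
```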
My simple benchmarks show no reduction in efficiency over the existing channels
today, and a 3x improvement in the oneshot case. I sadly don't have a
pre-last-rewrite compiler to test out the old old oneshots, but I would imagine
that the performance is comparable, if slightly slower (due to atomic reference
counting).
This commit also brings a bonus bugfix to channels: the pending queue of
messages is now dropped when the Port disappears, rather than when both the Port
and the Chan disappear.
Beforehand, using a concurrent queue always mandated that the "shared state" be
stored internally to the queues in order to provide a safe interface. This isn't
quite as flexible as one would want in some circumstances, so this commit moves
the shared state out of the queues themselves.
The queues no longer have a "default useful safe" interface, but rather a
"default safe" interface (minus the useful part). The queues have to be shared
manually through an Arc or some other means. This allows them to be a little
more flexible at the cost of a usability hindrance.
I plan on using this new flexibility to upgrade a channel to a shared channel
seamlessly.
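As a rough sketch of the new shape (`Queue` here is just a placeholder type to
keep the sketch self-contained, and the crate/path for `Arc` is a guess since
it moved around in this era):

```
extern crate sync; // crate and path for Arc are assumptions
use sync::Arc;

// Placeholder standing in for one of the concurrent queue types.
struct Queue;
impl Queue { fn new() -> Queue { Queue } }

fn main() {
    // The queue no longer carries its own shared state; the caller wraps it
    // (here in an Arc) and hands out clones to whoever needs access.
    let queue = Arc::new(Queue::new());
    let producer = queue.clone();
    let consumer = queue.clone();
    // producer/consumer can be moved to separate tasks; upgrading a channel
    // to a shared channel later is just a matter of handing out more clones.
}
```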
This is part of the overall strategy I would like to take when approaching
issue #11165. The only two I/O objects that reasonably want to be "split" are
the network stream objects. Everything else can be "split" by just creating
another version.
The initial idea I had was to literally split the object into a reader and a
writer half, but that would just introduce lots of clutter with extra interfaces
that were a little unnecessary, or it would return a ~Reader and a ~Writer, which
means you couldn't access things like the remote peer name or local socket name.
The solution I found to be nicer was to just clone the stream itself. The clone
is just a clone of the handle, nothing fancy going on at the kernel level.
Conceptually I found this very easy to wrap my head around (everything else
supports clone()), and it solved the "split" problem at the same time.
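Concretely, the intended usage is something like this (a sketch only; error
handling and trait imports are elided, and signatures may differ):

```
use std::io::net::tcp::TcpStream;

fn handle(stream: TcpStream) {
    // clone() duplicates the handle only; both halves still refer to the
    // same underlying connection.
    let stream2 = stream.clone();

    spawn(proc() {
        let mut reader = stream2;
        // One task can block on reads...
        let _ = reader.read_byte();
    });

    let mut writer = stream;
    // ...while another task writes concurrently.
    let _ = writer.write(bytes!("ping"));
}
```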
The cloning support is pretty specific per platform/lib combination:
* native/win32 - uses some specific WSA apis to clone the SOCKET handle
* native/unix - uses dup() to get another file descriptor
* green/all - This is where things get interesting. When we support full clones
of a handle, this implies that we're allowing simultaneous writes
and reads to happen. It turns out that libuv doesn't support two
simultaneous reads or writes of the same object. It does support
*one* read and *one* write at the same time, however. Some extra
infrastructure was added to just block concurrent writers/readers
until the previous read/write operation was completed.
I've added tests to the tcp/unix modules to make sure that this functionality is
supported everywhere.
The general consensus is that we want to move away from conditions for I/O, and I propose a two-step plan for doing so:
1. Warn about unused `Result` types. When all of I/O returns `Result`, it will require you inspect the return value for an error *only if* you have a result you want to look at. By default, for things like `write` returning `Result<(), Error>`, these will all go silently ignored. This lint will prevent blind ignorance of these return values, letting you know that there's something you should do about them.
2. Implement a `try!` macro:
```
macro_rules! try( ($e:expr) => (match $e { Ok(e) => e, Err(e) => return Err(e) }) )
```
With these two tools combined, I feel that we get almost all the benefits of conditions. The first step (the lint) is a sanity check that you're not ignoring return values at callsites. The second step is to provide a convenience method of returning early out of a sequence of computations. After thinking about this for a while, I don't think that we need the so-called "do-notation" in the compiler itself because I think it's just *too* specialized. Additionally, the `try!` macro is super lightweight, easy to understand, and works almost everywhere. As soon as you want to do something more fancy, my answer is "use match".
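For example, once the I/O APIs return `Result`, a fallible sequence collapses
to something like the following (the types and methods here illustrate the
direction only, not the current signatures):

```
use std::io::{File, IoError};

fn load(path: &Path) -> Result<~str, IoError> {
    let mut file = try!(File::open(path));    // early return on Err
    let contents = try!(file.read_to_str());  // ditto
    Ok(contents)
}
```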
Basically, with these two tools in action, I would be comfortable removing conditions. What do others think about this strategy?
----
This PR specifically implements the `unused_result` lint. I actually added two lints, `unused_result` and `unused_must_use`, and the first commit has the rationale for why `unused_result` is turned off by default.
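As a small illustration of what each lint fires on (the names in this snippet
are made up for the example):

```
#[must_use]
struct Guard;

fn lock() -> Guard { Guard }
fn compute() -> int { 3 }

fn main() {
    lock();    // unused_must_use: a #[must_use] value is immediately dropped
    compute(); // unused_result: a non-() result is ignored (lint off by default)
}
```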
The `malloc` family of functions may return a null pointer for a
zero-size allocation, which should not be interpreted as an
out-of-memory error.
If the implementation does not return a null pointer, then handling
this will result in memory savings for zero-size types.
This also switches some code to `malloc_raw` in order to maintain a
centralized point for handling out-of-memory in `rt::global_heap`.
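A condensed sketch of the policy (the real change lives in `rt::global_heap`;
the libc types and the out-of-memory path are simplified here):

```
use std::libc;

unsafe fn checked_malloc(size: uint) -> *mut u8 {
    if size == 0 {
        // malloc(0) may legitimately return a null pointer; that is not an
        // out-of-memory condition. Skip the call and hand back a non-null
        // sentinel, saving the allocation for zero-size types entirely.
        return 1 as *mut u8;
    }
    let p = libc::malloc(size as libc::size_t) as *mut u8;
    if p.is_null() {
        // Only a failed nonzero-size request is treated as out-of-memory.
        fail!("out of memory");
    }
    p
}
```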
Closes #11634
Right now on Linux, an empty executable with LTO still depends on librt because
of the clock_gettime function in rust_builtin.o, but this commit moves this
dependency into a rust function which is subject to elimination via LTO.
At the same time, this also drops libstd's dependency on librt on unix platforms
other than OS X, because the library is only used by extra::time (and now the
dependency is listed in that module instead).
* vec::raw::to_ptr is gone
* Pausible => Pausable
* Removing @
* Calling the main task "<main>"
* Removing unused imports
* Removing unused mut
* Bringing some libextra tests up to date
* Allowing compiletest to work at stage0
* Fixing the bootstrap-from-c rmake tests
* assert => rtassert in a few cases
* printing to stderr instead of stdout in fail!()