This weeds out a bunch of warnings when building stdtest on windows, and it also adds
a check! macro to the io::fs tests to help diagnose errors that are cropping up
on windows platforms as well.
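For reference, a minimal sketch of what such a `check!` macro can look like (the actual macro in the tests may differ slightly): it unwraps an `IoResult` and reports which expression failed rather than failing with a bare message.
```rust
// Hypothetical sketch of a check!-style macro for the io::fs tests.
macro_rules! check( ($e:expr) => (
    match $e {
        Ok(t) => t,
        // Report the failing expression along with the IoError it produced.
        Err(e) => fail!("{} failed with: {}", stringify!($e), e),
    }
) )
// Usage would look like: let f = check!(File::open(&path));
```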
cc #12516
This patch series does a couple things:
* replaces manual `Hash` implementations with `#[deriving(Hash)]`
* adds `Hash` back to `std::prelude`
* minor cleanup of whitespace and variable names.
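As a small, hypothetical illustration of the first point (the `Point` type here is made up):
```rust
// Instead of a hand-written Hash impl, derive it. With Hash back in the
// prelude, a type like this can be used directly as a HashMap key.
#[deriving(Eq, Hash)]
struct Point {
    x: int,
    y: int,
}
```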
Turns out the `timeout` command was exiting immediately because it didn't like
having its output piped. Instead use `ping` repeatedly to get a process that will
sleep for a while.
cc #12516
These two tests are notoriously flaky on the windows bots right now, so I'm
ignoring them until I can investigate them some more. The truncate_works test
has been flaky for quite some time, but it has gotten much worse recently. The
test_exists test has been flaky since the recent std::run rewrite landed.
Finally, the "unix pipe" test failure is a recent discovery on the try bots. I
haven't seen this failing much, but better safe than sorry!
cc #12516
This commit removes deriving(ToStr) in favor of deriving(Show), migrating all impls of ToStr to fmt::Show.
Most of the details can be found in the first commit message.
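A hypothetical before/after of the migration (the type is made up):
```rust
// Before: #[deriving(ToStr)]. After: derive Show and let format! do the work.
#[deriving(Show)]
struct Temperature { degrees: f64 }

fn main() {
    // println!/format! go through fmt::Show, and to_str() now does too.
    println!("{}", Temperature { degrees: 21.5 });
}
```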
Closes #12477
The std::run module is a relic from a standard library long since past, and
there's not much use to having two modules to execute processes with where one
is slightly more convenient. This commit merges the two modules, moving lots of
functionality from std::run into std::io::process and then deleting
std::run.
New things you can find in std::io::process are:
* Process::new() now only takes prog/args
* Process::configure() takes a ProcessConfig
* Process::status() is the same as run::process_status
* Process::output() is the same as run::process_output
* I/O for spawned tasks is now defaulted to captured in pipes instead of ignored
* Process::kill() was added (plus an associated green/native implementation)
* Process::wait_with_output() is the same as the old finish_with_output()
* destroy() is now signal_exit()
* force_destroy() is now signal_kill()
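A rough sketch of how the merged API reads (error handling elided, exact signatures hedged):
```rust
use std::io::process::Process;

fn main() {
    // Spawn with just a program and args; the child's I/O is piped by default.
    let mut child = Process::new("ls", []).unwrap();

    // One-shot equivalents of the old run::process_status / run::process_output.
    let _status = Process::status("ls", []).unwrap();
    let _output = Process::output("ls", []).unwrap();

    // Wait for the spawned child and collect everything it wrote, like the
    // old finish_with_output().
    let _result = child.wait_with_output();
}
```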
Closes #2625. Closes #10016.
This commit changes the ToStr trait to:
impl<T: fmt::Show> ToStr for T {
    fn to_str(&self) -> ~str { format!("{}", *self) }
}
The ToStr trait has been on the chopping block for quite a while now, and this is
the final nail in its coffin. The trait and the corresponding method are not
being removed as part of this commit, but rather any implementations of the
`ToStr` trait are being forbidden because of the generic impl. The new way to
get the `to_str()` method to work is to implement `fmt::Show`.
Formatting into a `&mut Writer` (as `format!` does) is much more efficient than
`ToStr` when building up large strings. The `ToStr` trait forces many
intermediate allocations to be made while the `fmt::Show` trait allows
incremental buildup in the same heap allocated buffer. Additionally, the
`fmt::Show` trait is much more extensible in terms of interoperation with other
`Writer` instances and in more situations. By design the `ToStr` trait requires
at least one allocation whereas the `fmt::Show` trait does not require any
allocations.
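Concretely, the new contract looks like this (a tiny sketch):
```rust
fn main() {
    // For any type that implements fmt::Show, to_str() is now just
    // format!("{}", value); a hand-written `impl ToStr for Foo` is rejected
    // because it would overlap with the generic impl above.
    let a: ~str = 42i.to_str();
    let b: ~str = format!("{}", 42i);
    assert_eq!(a, b);
}
```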
Closes #8242. Closes #9806.
These two containers are indeed collections, so their place is in
libcollections, not in libstd. There will always be a hash map as part of the
standard distribution of Rust, but by moving it out of the standard library it
makes libstd that much more portable to more platforms and environments.
This conveniently also removes the stuttering of 'std::hashmap::HashMap',
although 'collections::HashMap' is only one character shorter.
One of the most common ways to use the stdin stream is to read it line by line
for a small program. In order to facilitate this common usage pattern, this
commit changes the stdin() function to return a BufferedReader by default. A new
`stdin_raw()` method was added to get access to the raw unbuffered stream.
I have not changed the stdout or stderr methods because they are currently
unable to flush in their destructor, but #12403 should have just fixed that.
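The pattern this is meant to enable, sketched (`lines()` comes from the `Buffer` trait):
```rust
use std::io;

fn main() {
    // stdin() is now a BufferedReader, so line iteration works directly;
    // stdin_raw() is available when the unbuffered handle is needed.
    for line in io::stdin().lines() {
        print!("echo: {}", line);
    }
}
```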
This is in preparation to remove the implementations of ToStrRadix in integers, and to remove the associated logic from `std::num::strconv`.
The parts that still need to be liberated are:
- `std::fmt::Formatter::runplural`
- `num::{bigint, complex, rational}`
This "bubble up an error" macro was originally named if_ok! in order to get it
landed, but after the fact it was discovered that this name is not exactly
desirable.
The name `if_ok!` doesn't immediately make clear that it has much to do with error
handling, and it doesn't look fantastic in all contexts (e.g. `if if_ok!(...) {}`). In
general, the agreed opinion about `if_ok!` is that it came in as subpar.
The name `try!` is more evocative of error handling, it's shorter by two letters,
and it looks fitting in almost all circumstances. One concern about the word
`try!` is that it's too evocative of exceptions, but the belief is that this
will be overcome with documentation and examples.
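A small sketch of `try!` in action against the IoResult-returning APIs (the function and paths are hypothetical):
```rust
use std::io::{File, IoResult};

// Each step either yields its Ok value or returns the IoError to the caller.
fn copy_file(from: &Path, to: &Path) -> IoResult<()> {
    let mut input = try!(File::open(from));
    let contents = try!(input.read_to_end());
    let mut output = try!(File::create(to));
    output.write(contents.as_slice())
}
```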
Closes #12037
* Implementation of pipe_win32 filled out for libnative
* Reorganize pipes to be clone-able
* Fix a few file descriptor leaks on error
* Factor out some common code into shared functions
* Make use of the if_ok!() macro for less indentation
Closes #11201
This is useful in contexts like this:
```rust
let size = rdr.read_be_i32() as uint;
let mut limit = LimitReader::new(rdr.by_ref(), size);
let thing = read_a_thing(&mut limit);
assert!(limit.limit() == 0);
```
When tests fail, their stdout and stderr are printed as part of the summary, but
this helps suppress failure messages from #[should_fail] tests and generally
clean up the output of the test runner.
This adopts the rules posted in #10432:
1. If a seek position is negative, then an error is generated
2. Seeks beyond the end-of-file are allowed. Future writes will fill the gap
with data and future reads will return errors.
3. Seeks within the bounds of a file are fine.
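A sketch of the three rules against the `io::File` API (the file name is hypothetical and the exact seek signature is hedged):
```rust
use std::io::{File, Open, ReadWrite, SeekSet};

fn main() {
    let path = Path::new("example.dat");
    let mut file = File::open_mode(&path, Open, ReadWrite).unwrap();

    assert!(file.seek(-1, SeekSet).is_err()); // rule 1: negative position is an error
    file.seek(1024, SeekSet).unwrap();        // rule 2: seeking past EOF is allowed...
    file.write([0u8]).unwrap();               //         ...and a later write fills the gap
    file.seek(0, SeekSet).unwrap();           // rule 3: in-bounds seeks just work
}
```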
Closes #10432
This, the Nth rewrite of channels, is not a rewrite of the core logic behind
channels, but rather their API usage. In the past, we had the distinction
between oneshot, stream, and shared channels, but the most recent rewrite
dropped oneshots in favor of streams and shared channels.
This distinction of stream vs shared has shown that it's not quite what we'd
like either, and this moves the `std::comm` module in the direction of "one
channel to rule them all". There now remains only one Chan and one Port.
This new channel is actually a hybrid oneshot/stream/shared channel under the
hood in order to optimize for the use cases in question. Additionally, this also
reduces the cognitive burden of having to choose between a Chan or a SharedChan
in an API.
My simple benchmarks show no reduction in efficiency over the existing channels
today, and a 3x improvement in the oneshot case. I sadly don't have a
pre-last-rewrite compiler to test out the old old oneshots, but I would imagine
that the performance is comparable, though slightly slower (due to atomic reference
counting).
This commit also brings a bonus bugfix to channels: the pending queue of
messages is dropped as soon as the Port disappears, rather than when both the Port
and the Chan disappear.
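The resulting API, sketched: one constructor for every use case, and the Chan can be cloned when more senders are needed.
```rust
fn main() {
    // One channel type, whether it ends up used as a oneshot, a stream,
    // or a shared channel.
    let (port, chan) = Chan::new();

    spawn(proc() {
        chan.send("hello");
    });

    assert_eq!(port.recv(), "hello");
}
```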
This is a fairly trivial (but IMHO handy) change to implement IterBytes for IpAddr and SocketAddr.
I originally stumbled across this because I wanted to use a SocketAddr as a HashMap key and discovered that I couldn't do it directly. Had to impl IterBytes on a new intermediate type to work around it.
I have a hunch this just deadlocked the windows bots. Due to UDP being a lossy
protocol, I don't think we can guarantee that the server can receive both
packets, so just listen for one of them.
This is part of the overall strategy I would like to take when approaching
issue #11165. The only two I/O objects that reasonably want to be "split" are
the network stream objects. Everything else can be "split" by just creating
another version.
The initial idea I had was to literally split the object into a reader and a
writer half, but that would just introduce lots of clutter with extra interfaces
that were a little unnecessary, or it would return a ~Reader and a ~Writer which
means you couldn't access things like the remote peer name or local socket name.
The solution I found to be nicer was to just clone the stream itself. The clone
is just a clone of the handle, nothing fancy going on at the kernel level.
Conceptually I found this very easy to wrap my head around (everything else
supports clone()), and it solved the "split" problem at the same time.
The cloning support is pretty specific per platform/lib combination:
* native/win32 - uses some specific WSA apis to clone the SOCKET handle
* native/unix - uses dup() to get another file descriptor
* green/all - This is where things get interesting. When we support full clones
of a handle, this implies that we're allowing simultaneous writes
and reads to happen. It turns out that libuv doesn't support two
simultaneous reads or writes of the same object. It does support
*one* read and *one* write at the same time, however. Some extra
infrastructure was added to just block concurrent writers/readers
until the previous read/write operation was completed.
I've added tests to the tcp/unix modules to make sure that this functionality is
supported everywhere.
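A sketch of the clone-to-split pattern on a TcpStream (address and error handling simplified; the peer is assumed to exist):
```rust
use std::io::net::ip::{Ipv4Addr, SocketAddr};
use std::io::net::tcp::TcpStream;

fn main() {
    let addr = SocketAddr { ip: Ipv4Addr(127, 0, 0, 1), port: 8080 };
    let stream = TcpStream::connect(addr).unwrap();

    // Clone the handle so one task can write while another reads.
    let mut reader = stream.clone();
    spawn(proc() {
        let mut writer = stream;
        writer.write([1u8, 2, 3]).unwrap();
    });
    let _first = reader.read_byte().unwrap();
}
```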
* All I/O now returns IoResult<T> = Result<T, IoError>
* All formatting traits now return fmt::Result = IoResult<()>
* The if_ok!() macro was added to libstd
LLVM fails to properly optimize the shifts used to convert the source
value to the right endianness. The resulting assembly copies the value
to the stack one byte at a time even when there's no conversion required
(e.g. u64_to_le_bytes on a little endian machine).
Instead of doing the conversion ourselves using shifts, we can use the
existing intrinsics to perform the endianness conversion and then
transmute the value to get a fixed vector of its bytes.
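The core of the trick, sketched (the endian conversion itself still happens via the existing intrinsics and is omitted here):
```rust
use std::cast::transmute;

fn main() {
    // Once the value is already in the desired byte order, its bytes can be
    // read out as a fixed-length vector in one go instead of being shifted
    // out one byte at a time.
    let n: u64 = 0x0102030405060708;
    let bytes: [u8, ..8] = unsafe { transmute(n) };
    assert_eq!(bytes.len(), 8);
}
```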
Before:
test be_i8 ... bench: 21442 ns/iter (+/- 70)
test be_i16 ... bench: 21447 ns/iter (+/- 45)
test be_i32 ... bench: 23832 ns/iter (+/- 63)
test be_i64 ... bench: 26887 ns/iter (+/- 267)
test le_i8 ... bench: 21442 ns/iter (+/- 56)
test le_i16 ... bench: 21448 ns/iter (+/- 36)
test le_i32 ... bench: 23825 ns/iter (+/- 153)
test le_i64 ... bench: 26271 ns/iter (+/- 138)
After:
test be_i8 ... bench: 21438 ns/iter (+/- 10)
test be_i16 ... bench: 21441 ns/iter (+/- 15)
test be_i32 ... bench: 19057 ns/iter (+/- 6)
test be_i64 ... bench: 21439 ns/iter (+/- 34)
test le_i8 ... bench: 21438 ns/iter (+/- 19)
test le_i16 ... bench: 21439 ns/iter (+/- 8)
test le_i32 ... bench: 21439 ns/iter (+/- 19)
test le_i64 ... bench: 21438 ns/iter (+/- 22)
`Times::times` was always a second-class loop because it did not support the `break` and `continue` operations. Its playful appeal (which I liked) was then lost after `do` was disabled for closures. It's time to let this one go.
I found it awkward to have `MutableCloneableVector` and `CloneableIterator` on the one hand, and `CopyableVector` etc. on the other hand.
The concerned traits are:
* `CopyableVector` --> `CloneableVector`
* `OwnedCopyableVector` --> `OwnedCloneableVector`
* `ImmutableCopyableVector` --> `ImmutableCloneableVector`
* `CopyableTuple` --> `CloneableTuple`
The general consensus is that we want to move away from conditions for I/O, and I propose a two-step plan for doing so:
1. Warn about unused `Result` types. When all of I/O returns `Result`, it will require you inspect the return value for an error *only if* you have a result you want to look at. By default, for things like `write` returning `Result<(), Error>`, these will all go silently ignored. This lint will prevent blind ignorance of these return values, letting you know that there's something you should do about them.
2. Implement a `try!` macro:
```
macro_rules! try( ($e:expr) => (match $e { Ok(e) => e, Err(e) => return Err(e) }) )
```
With these two tools combined, I feel that we get almost all the benefits of conditions. The first step (the lint) is a sanity check that you're not ignoring return values at callsites. The second step is to provide a convenient way of returning early out of a sequence of computations. After thinking about this for a while, I don't think that we need the so-called "do-notation" in the compiler itself because I think it's just *too* specialized. Additionally, the `try!` macro is super lightweight, easy to understand, and works almost everywhere. As soon as you want to do something more fancy, my answer is "use match".
Basically, with these two tools in action, I would be comfortable removing conditions. What do others think about this strategy?
----
This PR specifically implements the `unused_result` lint. I actually added two lints, `unused_result` and `unused_must_use`, and the first commit has the rationale for why `unused_result` is turned off by default.
These are either returned from public functions (and so really should
appear in the documentation, but don't since they're private) or are
implementation details that are currently public.
Native timers are a much hairier thing to deal with than green timers due to the
interface that we would like to expose (both a blocking sleep() and a
channel-based interface). I ended up implementing timers in three different ways
for the various platforms that we support.
In all three of the implementations, there is a worker thread which does send()s
on channels for timers. This worker thread is initialized once and then
communicated to in a platform-specific manner, but there's always a shared
channel available for sending messages to the worker thread.
* Windows - I decided to use windows kernel timer objects via
CreateWaitableTimer and SetWaitableTimer in order to provide sleeping
capabilities. The worker thread blocks via WaitForMultipleObjects where one of
the objects is an event that is used to wake up the helper thread (which then
drains the incoming message channel for requests).
* Linux/(Android?) - These have the ideal interface for implementing timers,
timerfd_create. Each timer corresponds to a timerfd, and the helper thread
uses epoll to wait for all active timers and then send() for the next one that
wakes up. The tricky part in this implementation is updating a timerfd, but
see the implementation for the fun details.
* OSX/FreeBSD - These obviously don't have the windows APIs, and sadly don't
have the timerfd api available to them, so I have thrown together a solution
which uses select() plus a timeout in order to ad-hoc-ly implement a timer
solution for threads. The implementation is backed by a sorted array of timers
which need to fire. As I said, this is an ad-hoc solution which is certainly
not accurate timing-wise. I have done this implementation due to the lack of
other primitives to provide an implementation, and I've done it the best that
I could, but I'm sure that there's room for improvement.
I'm pretty happy with how these implementations turned out. In theory we could
drop the timerfd implementation and have linux use the select() + timeout
implementation, but it's so inaccurate that I would much rather continue to use
timerfd rather than my ad-hoc select() implementation.
The only change that I would make to the API in general is to have a generic
sleep() method on an IoFactory which doesn't require allocating a Timer object.
For everything but windows it's super-cheap to request a blocking sleep for a
set amount of time, and it's probably worth it to provide a sleep() which
doesn't do something like allocate a file descriptor on linux.
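For reference, the user-facing surface all of this sits behind, sketched:
```rust
use std::io::timer;
use std::io::timer::Timer;

fn main() {
    // Blocking sleep for the current task.
    timer::sleep(250);

    // Channel-based interface: a Port that fires once the timeout elapses.
    let mut t = Timer::new().unwrap();
    let timeout = t.oneshot(1000);
    timeout.recv();
}
```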
* Reexport io::mem and io::buffered structs directly under io, make mem/buffered
private modules
* Remove with_mem_writer
* Remove DEFAULT_CAPACITY and use DEFAULT_BUF_SIZE (in io::buffered)
cc #11119
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except for in
a `let` initializer, where we default to the innermost block. Rules
are documented in the code, but not in the manual (yet).
See new test run-pass/cleanup-value-scopes.rs for examples.
- Refactors Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
The `print!` and `println!` macros are now the preferred method of printing, and so there is no reason to export the `stdio` functions in the prelude. The functions have also been replaced by their macro counterparts in the tutorial and other documentation so that newcomers don't get confused about what they should be using.
Instead of reading a byte at a time in a loop we copy the relevant bytes into
a temporary vector of size eight. We can then read the value from the temporary
vector using a single u64 read. LLVM seems to be able to optimize this
almost scarily well.
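A sketch of the approach (the helper name is made up):
```rust
use std::cast::transmute;

// Copy the available bytes into an eight-byte buffer, then read it back with
// a single (native-endian) u64 read instead of shifting byte by byte.
fn u64_from_bytes(data: &[u8]) -> u64 {
    let mut buf = [0u8, ..8];
    buf.copy_from(data);
    unsafe { transmute(buf) }
}

fn main() {
    let _value = u64_from_bytes([1u8, 2, 3, 4]);
}
```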
This is just an unnecessary trait that no one's ever going to parameterize over
and it's more useful to just define the methods directly on the types
themselves. The implementors of this trait almost always don't want
inner_mut_ref(), but they're forced to define it as well.
This will allow capturing of common things like logging messages, stdout prints
(using stdio println), and failure messages (printed to stderr). Any new prints
added to libstd should be funneled through these task handles to allow capture
as well.
Additionally, this commit redirects logging back through a `Logger` trait so the
log level can be usefully consumed by an arbitrary logger.
This commit also introduces methods to set the task-local stdout handles:
* std::io::stdio::set_stdout
* std::io::stdio::set_stderr
* std::io::logging::set_logger
These methods all return the previous logger just in case it needs to be used
for inspection.
I plan on using this infrastructure for extra::test soon, but we don't quite
have the primitives that I'd like to use for it, so it doesn't migrate
extra::test at this time.
Closes #6369
Similarly to the recent commit to do this for networking, there's no reason that
a read on a file descriptor should continue reading until the entire buffer is
full. This makes sense when dealing with literal files, but when dealing with
things like stdin this doesn't make sense.
libnative erroneously would attempt to fill the entire buffer in a call to
`read` before returning, when it should instead return immediately because
there's not guaranteed to be any data that will ever be received again.
Closes #11328
These methods are sorely needed on readers and writers, and I believe that the
encoding story should be solved with composition. This commit adds back the
missing functions for reading/writing strings on generic Readers/Writers.
The old `rtio-processes` run-pass test is now moved into libstd's `io::process` module, and all process and TCP tests are now run with `iotest!` (both a native and a green version are tested).
All TCP networking on windows is provided by `ws2_32` which is apparently very similar to unix networking (hurray!).
Move the tests into libstd, use the `iotest!` macro to test both native and uv
bindings, and use the cloexec trick to figure out when the child process fails
in exec.
This patch changes `result::collect` (and adds a new `option::collect`) from creating a `~[T]` to take an `Iterator`. This makes the function much more flexible, and may replace the need for #10989.
This patch is a little more complicated than it needs to be because of #11084. Once that is fixed we can replace the `CollectIterator` with a `Scan` iterator.
It also fixes a test warning.
* vec::raw::to_ptr is gone
* Pausible => Pausable
* Removing @
* Calling the main task "<main>"
* Removing unused imports
* Removing unused mut
* Bringing some libextra tests up to date
* Allowing compiletest to work at stage0
* Fixing the bootstrap-from-c rmake tests
* assert => rtassert in a few cases
* printing to stderr instead of stdout in fail!()
Note that this removes a number of run-pass tests which are exercising behavior
of the old runtime. This functionality no longer exists and is thoroughly tested
inside of libgreen and libnative. There isn't really the notion of "starting the
runtime" any more. The major notion now is "bootstrapping the initial task".
All tests except for the homing tests are now working again with the
librustuv/libgreen refactoring. The homing-related tests are currently commented
out and now placed in the rustuv::homing module.
I plan on refactoring scheduler pool spawning in order to enable more homing
tests in a future commit.
This commit introduces a new crate called "native" which will be the crate that
implements the 1:1 runtime of rust. This currently entails having an
implementation of std::rt::Runtime inside of libnative as well as moving all of
the native I/O implementations to libnative.
The current snag is that the start lang item must currently be defined in
libnative in order to start running, but this will change in the future.
A cool fact about this crate: there are no extra features enabled.
Note that this commit does not include any makefile support necessary for
building libnative, that's all coming in a later commit.
Printing is an incredibly useful debugging utility, and it's not much help if
your debugging prints just trigger an obscure abort when you need them most. In
order to handle this case, forcibly fall back to a libc::write implementation of
printing whenever a local task is not available.
Note that this is *not* a 1:1 fallback. All 1:1 rust tasks will still have a
local Task that they can go through (and stdio will be created through the local
IO factory); this is only a fallback for "no context" rust code (such as that
setting up the context).
It is not the case that all programs will always be able to acquire an instance
of the LocalIo borrow, so this commit exposes this limitation by returning
Option<LocalIo> from LocalIo::borrow().
At the same time, a helper method LocalIo::maybe_raise() has been added in order
to encapsulate the functionality of raising on io_error if there is no local I/O
available.
This module contains many M:N specific concepts. This will no longer be
available with libgreen, and most functions aren't really that necessary today
anyway. New testing primitives will be introduced as they become available for
1:1 and M:N.
A new io::test module is introduced with the new ip4/ip6 address helpers to
continue usage in io tests.
Could prevent callers from catching the situation and lead to e.g. early
iterator termination (cf. `Reader::read_byte`), since `None` is to be
returned only on EOF.
This adds a bunch of useful Reader and Writer implementations. I'm not a
huge fan of the name `util` but I can't think of a better name and I
don't want to make `std::io` any longer than it already is.
- `Buffer.lines()` returns `LineIterator`, which yields lines using
`.read_line()`.
- `Reader.bytes()` now takes `&mut self` instead of `self`.
- `Reader.read_until()` swallows `EndOfFile`. This also affects
`.read_line()`.
This reverts commit c54427ddfb.
Leave in the #[ignore]s that were added to rustpkg tests.
Conflicts:
src/librustc/driver/driver.rs
src/librustc/metadata/creader.rs
This function had type &[u8] -> ~str, i.e. it allocates a string
internally, even though the non-allocating versions that take &[u8] ->
&str and ~[u8] -> ~str are all that is necessary in most circumstances.
BufferedWriter::inner flushes before returning the underlying writer.
BufferedWriter::write no longer flushes the underlying writer.
LineBufferedWriter::write flushes up to the *last* newline in the input
string, not the first.
This implements a fair amount of the unimpl() functionality in io::native
relating to filesystem operations. I've also modified all io::fs tests to run in
both a native and uv environment (so everything is actually tested).
There are a few bits of remaining functionality which I was unable to get
working:
* truncate on windows
* change_file_times on windows
* lstat on windows
I think that change_file_times may just need a better interface, but the other
two have large implementations in libuv which I didn't want to tackle trying to
copy. I found a `chsize` function to work for truncate on windows, but it
doesn't quite seem to be working out.
This implements a fair amount of the unimpl() functionality in io::native
relating to filesystem operations. I've also modified all io::fs tests to run in
both a native and uv environment (so everything is actually tested).
There are two bits of remaining functionality which I was unable to get
working:
* change_file_times on windows
* lstat on windows
I think that change_file_times may just need a better interface, but lstat has a
large implementation in libuv which I didn't want to tackle trying to copy.
There are issues with reading stdin when it is actually attached to a pipe, but
I have run into no problems in writing to stdout/stderr when they are attached
to pipes.
These commits create a `Buffer` trait in the `io` module which represents an I/O reader which is internally buffered. This abstraction is used to reasonably implement `read_line` and `read_until` along with at least an ok implementation of `read_char` (although I certainly haven't benchmarked `read_char`).
This commit re-organizes the io::native module slightly in order to have a
working implementation of rtio::IoFactory which uses native implementations. The
goal is to seamlessly multiplex among libuv/native implementations wherever
necessary.
Right now most of the native I/O is unimplemented, but we have existing bindings
for file descriptors and processes which have been hooked up. What this means is
that you can now invoke println!() from libstd with no local task, no local
scheduler, and even without libuv.
There's still plenty of work to do on the native I/O factory, but this is the
first step toward making it an official portion of the standard library. I don't
expect anyone to reach into io::native directly, but rather only std::io
primitives will be used. Each std::io interface seamlessly falls back onto the
native I/O implementation if the local scheduler doesn't have a libuv one
(hurray trait objects!)