This ends up saving a single `call` instruction in the optimised code,
but saves a few hundred lines of non-optimised IR for `fn main() {
fail!("foo {}", "bar"); }` (comparing against the minimal generic
baseline from the parent commit).
This splits the vast majority of the code path taken by
`fail!()` (`begin_unwind`) into a separate non-generic inline(never)
function, so that uses of `fail!()` only monomorphise a small amount of
code, reducing code bloat and making very small crates compile faster.
The following are renamed:
* `min_value` => `MIN`
* `max_value` => `MAX`
* `bits` => `BITS`
* `bytes` => `BYTES`
All tests pass, except for `run-pass/phase-syntax-link-does-resolve.rs`. I doubt that failure is related, though.
Fixes #10010.
This is just an initial implementation and does not yet fully replace `~[T]`. A generic initialization syntax for containers is missing, and the slice functionality needs to be reworked to make auto-slicing unnecessary.
Traits for supporting indexing properly are also required. This also needs to be fixed to make ring buffers as easy to use as vectors.
The tests and documentation for `~[T]` can be ported over to this type when it is removed. I don't really expect DST to happen for vectors as having both `~[T]` and `Vec<T>` is overcomplicated and changing the slice representation to 3 words is not at all appealing. Unlike with traits, it's possible (and easy) to implement `RcSlice<T>` and `GcSlice<T>` without compiler help.
Native timers are a much hairier thing to deal with than green timers due to the
interface that we would like to expose (both a blocking sleep() and a
channel-based interface). I ended up implementing timers in three different ways
for the various platforms that we support.
In all three of the implementations, there is a worker thread which does send()s
on channels for timers. This worker thread is initialized once and then
communicated to in a platform-specific manner, but there's always a shared
channel available for sending messages to the worker thread.
* Windows - I decided to use windows kernel timer objects via
CreateWaitableTimer and SetWaitableTimer in order to provide sleeping
capabilities. The worker thread blocks via WaitForMultipleObjects where one of
the objects is an event that is used to wake up the helper thread (which then
drains the incoming message channel for requests).
* Linux/(Android?) - These have the ideal interface for implementing timers,
timerfd_create. Each timer corresponds to a timerfd, and the helper thread
uses epoll to wait for all active timers and then send() for the next one that
wakes up. The tricky part in this implementation is updating a timerfd, but
see the implementation for the fun details.
* OSX/FreeBSD - These obviously don't have the windows APIs, and sadly don't
have the timerfd api available to them, so I have thrown together a solution
which uses select() plus a timeout in order to ad-hoc-ly implement a timer
solution for threads. The implementation is backed by a sorted array of timers
which need to fire. As I said, this is an ad-hoc solution which is certainly
not accurate timing-wise. I have done this implementation due to the lack of
other primitives to provide an implementation, and I've done it the best that
I could, but I'm sure that there's room for improvement.
I'm pretty happy with how these implementations turned out. In theory we could
drop the timerfd implementation and have linux use the select() + timeout
implementation, but it's so inaccurate that I would much rather continue to use
timerfd rather than my ad-hoc select() implementation.
The only change that I would make to the API in general is to have a generic
sleep() method on an IoFactory which doesn't require allocating a Timer object.
For everything but windows it's super-cheap to request a blocking sleep for a
set amount of time, and it's probably worth it to provide a sleep() which
doesn't do something like allocate a file descriptor on linux.
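As a rough sketch of the shared worker-thread scheme, here is a portable, channel-driven version in present-day Rust (names are hypothetical; the real implementations use the platform primitives described above):

```rust
// Sketch only: a portable, channel-driven timer helper, not the actual
// platform-specific (waitable timer / timerfd / select) implementations.
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError, Sender};
use std::thread;
use std::time::Instant;

struct Request {
    deadline: Instant,
    notify: Sender<()>, // the send() happens on the worker thread
}

fn timer_helper(rx: Receiver<Request>) {
    let mut pending: Vec<Request> = Vec::new();
    loop {
        // Wait for a new request, but never past the nearest deadline.
        pending.sort_by_key(|r| r.deadline);
        let incoming = match pending.first() {
            Some(r) => {
                let wait = r.deadline.saturating_duration_since(Instant::now());
                match rx.recv_timeout(wait) {
                    Ok(req) => Some(req),
                    Err(RecvTimeoutError::Timeout) => None,
                    Err(RecvTimeoutError::Disconnected) => {
                        // No more requests will arrive; sleep out the deadline.
                        thread::sleep(wait);
                        None
                    }
                }
            }
            None => rx.recv().ok(), // nothing pending: block for work
        };
        match incoming {
            Some(req) => pending.push(req),
            None if pending.is_empty() => return, // all senders gone
            None => {}
        }
        // Fire every timer whose deadline has passed.
        let now = Instant::now();
        while pending.first().map_or(false, |r| r.deadline <= now) {
            let _ = pending.remove(0).notify.send(());
        }
    }
}

fn spawn_timer_helper() -> Sender<Request> {
    let (tx, rx) = channel();
    thread::spawn(move || timer_helper(rx));
    tx
}
```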
This routine is currently only used to clean up the timer helper thread in the
libnative implementation, but there are possibly other uses for this.
The documentation is clear that the procedures are *not* run with any task
context and hence have very little available to them. I also opted to disallow
at_exit inside of at_exit and just abort the process at that point.
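A minimal sketch of such a registry, with hypothetical names and the same two constraints (handlers run with no task context; registering during cleanup aborts):

```rust
// Sketch only: hypothetical names, not the actual runtime's at_exit internals.
use std::sync::Mutex;

static QUEUE: Mutex<Option<Vec<Box<dyn FnOnce() + Send>>>> =
    Mutex::new(Some(Vec::new()));

pub fn at_exit(f: Box<dyn FnOnce() + Send>) {
    match QUEUE.lock().unwrap().as_mut() {
        Some(queue) => queue.push(f),
        // at_exit called while the handlers are already running: abort.
        None => std::process::abort(),
    }
}

pub fn run_at_exit() {
    // Take the whole queue first so late registrations are detectable above.
    let handlers = QUEUE.lock().unwrap().take().unwrap_or_default();
    for f in handlers {
        f(); // runs with no task context
    }
}
```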
The `malloc` family of functions may return a null pointer for a
zero-size allocation, which should not be interpreted as an
out-of-memory error.
If the implementation does not return a null pointer, then handling
this will result in memory savings for zero-size types.
This also switches some code to `malloc_raw` in order to maintain a
centralized point for handling out-of-memory in `rt::global_heap`.
Closes #11634
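A sketch of the policy in present-day terms (using std::alloc in place of the old rt::global_heap module): a zero-size request gets a dangling, non-null pointer, and a genuine null from the allocator is still treated as out-of-memory in one place.

```rust
// Sketch only: modern std::alloc stand-in for the old malloc_raw helper.
use std::alloc::{alloc, handle_alloc_error, Layout};
use std::ptr::NonNull;

unsafe fn malloc_raw(size: usize) -> *mut u8 {
    if size == 0 {
        // Zero-size types need no storage: hand back a non-null dangling
        // pointer instead of asking the allocator.
        return NonNull::<u8>::dangling().as_ptr();
    }
    let layout = Layout::from_size_align(size, std::mem::align_of::<usize>()).unwrap();
    let p = alloc(layout);
    if p.is_null() {
        // The single, centralized out-of-memory path.
        handle_alloc_error(layout);
    }
    p
}
```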
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except for in
a `let` initializer, where we default to the innermost block. Rules
are documented in the code, but not in the manual (yet).
See new test run-pass/cleanup-value-scopes.rs for examples.
- Refactors Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
r? @pcwalton
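The distinction carries through to today's Rust; a small example (hypothetically named types) of the two default temporary scopes described above:

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    // Temporary scope = the innermost statement: the Noisy value is dropped
    // as soon as this statement finishes.
    let _len = Noisy("statement temporary").0.len();
    println!("after the statement");

    // Temporary borrowed in a `let` initializer: its scope is the innermost
    // block, so it outlives the statement.
    let kept = &Noisy("let-initializer temporary");
    println!("still alive: {}", kept.0);
} // the let-initializer temporary is dropped here
```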
The failure functions are generic, meaning they're candidates for getting
inlined across crates. This has been happening, leading to monstrosities like
that found in #11549. I have verified that the codegen is *much* better now that
we're not inlining the failure path (the slow path).
This will allow capturing of common things like logging messages, stdout prints
(using stdio println), and failure messages (printed to stderr). Any new prints
added to libstd should be funneled through these task handles to allow capture
as well.
Additionally, this commit redirects logging back through a `Logger` trait so the
log level can be usefully consumed by an arbitrary logger.
This commit also introduces methods to set the task-local stdout handles:
* std::io::stdio::set_stdout
* std::io::stdio::set_stderr
* std::io::logging::set_logger
These methods all return the previous logger just in case it needs to be used
for inspection.
I plan on using this infrastructure for extra::test soon, but we don't quite
have the primitives that I'd like to use for it, so it doesn't migrate
extra::test at this time.
Closes #6369
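A minimal sketch of the task-local handle idea (hypothetical types in present-day Rust, not the actual std::io::stdio / std::io::logging API): setters swap in a new handle and hand back the previous one.

```rust
use std::cell::RefCell;

pub trait Logger {
    fn log(&mut self, level: u32, msg: &str);
}

thread_local! {
    static LOGGER: RefCell<Option<Box<dyn Logger>>> = RefCell::new(None);
}

/// Install a new logger for this task/thread, returning the old one.
pub fn set_logger(new: Box<dyn Logger>) -> Option<Box<dyn Logger>> {
    LOGGER.with(|slot| slot.borrow_mut().replace(new))
}

pub fn log(level: u32, msg: &str) {
    LOGGER.with(|slot| {
        if let Some(logger) = slot.borrow_mut().as_mut() {
            logger.log(level, msg);
        }
    });
}
```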
This reverts commit f1b5f59287.
Using a private function of a library is a bad idea: several people (on
Linux) were hitting linking errors because of it (different/older
versions of glibc).
This removes the feature where newtype structs can be dereferenced like pointers, and likewise where certain enums can be dereferenced (which I imagine nobody realized still existed). This ad-hoc behavior is to be replaced by a more general overloadable dereference trait in the future.
I've been nursing this patch for two months and think it's about rebased up to master.
@nikomatsakis this makes a bunch of your type checking code noticeably uglier.
If there is a lot of data in thread-local storage some implementations
of pthreads (e.g. glibc) fail if you don't request a stack large enough
-- by adjusting for the minimum size we guarantee that our stacks are
always large enough. Issue #6233.
Previously this was an `rtabort!`, indicating a runtime bug. Promote
this to a more intentional abort and print a (slightly) more
informative error message.
Can't test this since our test suite can't handle an abort exit.
I consider this to close #910, and that we should open another issue about implementing less conservative semantics here.
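A tiny sketch of the stack-size adjustment described above, assuming the libc crate's PTHREAD_STACK_MIN constant as the platform minimum:

```rust
// Sketch only: always pad the caller's request by the platform minimum so
// static TLS never eats into the usable stack.
fn adjusted_stack_size(requested: usize) -> usize {
    requested + libc::PTHREAD_STACK_MIN
}
```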
For libgreen, bookkeeping should not be global but rather on a per-pool basis.
Inside libnative, it's known that there must be a global counter with a
mutex/cvar.
The benefit of taking this strategy is to remove this functionality from libstd
to allow fine-grained control of it through libnative/libgreen. Notably, helper
threads in libnative can manually decrement the global count so they don't count
towards the global count of threads. Also, the shutdown process of *all* sched
pools is now dependent on the number of tasks in the pool being 0 rather than
this only being a hardcoded solution for the initial sched pool in libgreen.
This involved adding a Local::try_take() method on the Local trait in order for
the channel wakeup to work inside of libgreen. The channel send was happening
from a SchedTask when there is no Task available in TLS, and now this is
possible to work (remote wakeups are always possible, just a little slower).
This commit makes the short titles of modules provided by libstd uniform,
in order to make their roles more explicit when glancing at the index.
Signed-off-by: Luca Bruno <lucab@debian.org>
* vec::raw::to_ptr is gone
* Pausible => Pausable
* Removing @
* Calling the main task "<main>"
* Removing unused imports
* Removing unused mut
* Bringing some libextra tests up to date
* Allowing compiletest to work at stage0
* Fixing the bootstrap-from-c rmake tests
* assert => rtassert in a few cases
* printing to stderr instead of stdout in fail!()
There was a race in the code previously where schedulers would *immediately*
shut down after spawning the main task (because the global task count would
still be 0). This fixes the logic by blocking the sched pool task in receiving on
a port instead of spawning a task into the pool to receive on a port.
The modifications necessary were to have a "simple task" running by the time the
code is executing, but this is a simple enough thing to implement and I foresee
this being necessary to have implemented in the future anyway.
Note that this removes a number of run-pass tests which are exercising behavior
of the old runtime. This functionality no longer exists and is thoroughly tested
inside of libgreen and libnative. There isn't really the notion of "starting the
runtime" any more. The major notion now is "bootstrapping the initial task".
This allows creation of different sched pools with different io factories.
Namely, this will be used to test the basic I/O loop in the green crate. This
can also be used to override the global default.
The scheduler pool now has a much more simplified interface. There is now a
clear distinction between creating the pool and then interacting the pool. When
a pool is created, all schedulers are not active, and only later if a spawn is
done does activity occur.
There are four operations that you can do on a pool:
1. Create a new pool. The only argument to this function is the configuration
for the scheduler pool. Currently the only configuration parameter is the
number of threads to initially spawn.
2. Spawn a task into this pool. This takes a procedure and task configuration
options and spawns a new task into the pool of schedulers.
3. Spawn a new scheduler into the pool. This will return a handle on which to
communicate with the scheduler in order to do something like a pinned task.
4. Shut down the scheduler pool. This will consume the scheduler pool, request
all of the schedulers to shut down, and then wait on all the scheduler
threads. Currently this will block the invoking OS thread, but I plan on
making 'Thread::join' not a thread-blocking call.
These operations can be used to encode all current usage of M:N schedulers, as
well as providing a simple interface through which a pool can be modified. There
is currently no way to remove a scheduler from a pool of schedulers, as there's
no way to guarantee that a scheduler has exited. This may be added in the
future, however (as necessary).
In the compiled version of local_ptr (the one with #[thread_local]), the take()
function didn't zero out the previous pointer, allowing for multiple takes (with
fewer runtime assertions being tripped).
This extracts everything related to green scheduling from libstd and introduces
a new libgreen crate. This mostly involves deleting most of std::rt and moving
it to libgreen.
Along with the movement of code, this commit rearchitects many functions in the
scheduler in order to adapt to the fact that Local::take now *only* works on a
Task, not a scheduler. This mostly just involved threading the current green
task through in a few locations, but there were one or two spots where things
got hairy.
There are a few repercussions of this commit:
* tube/rc have been removed (the runtime implementation of rc)
* There is no longer a "single threaded" spawning mode for tasks. This is now
encompassed by 1:1 scheduling + communication. Convenience methods have been
introduced that are specific to libgreen to assist in the spawning of pools of
schedulers.
Printing is an incredibly useful debugging utility, and it's not much help if
your debugging prints just trigger an obscure abort when you need them most. In
order to handle this case, forcibly fall back to a libc::write implementation of
printing whenever a local task is not available.
Note that this is *not* a 1:1 fallback. All 1:1 rust tasks will still have a
local Task that it can go through (and stdio will be created through the local
IO factory), this is only a fallback for "no context" rust code (such as that
setting up the context).
It is not the case that all programs will always be able to acquire an instance
of the LocalIo borrow, so this commit exposes this limitation by returning
Option<LocalIo> from LocalIo::borrow().
At the same time, a helper method LocalIo::maybe_raise() has been added in order
to encapsulate the functionality of raising on io_error if there is no local I/O
available.
For now, this moves the following modules to std::sync
* UnsafeArc (also removed unwrap method)
* mpsc_queue
* spsc_queue
* atomics
* mpmc_bounded_queue
* deque
We may want to remove some of the queues, but for now this moves things out of
std::rt into std::sync
This module contains many M:N specific concepts. This will no longer be
available with libgreen, and most functions aren't really that necessary today
anyway. New testing primitives will be introduced as they become available for
1:1 and M:N.
A new io::test module is introduced with the new ip4/ip6 address helpers to
continue usage in io tests.
This trait is used to abstract the differences between 1:1 and M:N scheduling
and is the sole dispatch point for the differences between these two scheduling
modes.
This, and the following series of commits, is not intended to compile. Only
after the entire transition is complete are programs expected to compile.
For `str.as_mut_buf`, un-closure-ification is achieved by outright removal (see commit message). The others are replaced by `.as_ptr`, `.as_mut_ptr` and `.len`
This code in resolve accidentally forced all types with an impl to become
public. This fixes it by default inheriting the privacy of what was previously
there and then becoming `true` if nothing else exists.
Closes #10545
* Streams are now ~3x faster than before (fewer allocations and more optimized)
* Based on a single-producer single-consumer lock-free queue that doesn't
always have to allocate on every send.
* Blocking via mutexes/cond vars outside the runtime
* Streams work in/out of the runtime seamlessly
* Select now works in/out of the runtime seamlessly
* Streams will now fail!() on send() if the other end has hung up
* try_send() will not fail
* PortOne/ChanOne removed
* SharedPort removed
* MegaPipe removed
* Generic select removed (only one kind of port now)
* API redesign
* try_recv == never block
* recv_opt == block, don't fail
* iter() == Iterator<T> for Port<T>
* removed peek
* Type::new
* Removed rt::comm
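A rough present-day analogue of that API shape, using std::sync::mpsc rather than the old Port/Chan types: try_recv never blocks, recv blocks, and the receiving end is iterable.

```rust
use std::sync::mpsc::{channel, TryRecvError};
use std::thread;

fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || {
        for i in 0..3 {
            tx.send(i).expect("receiver hung up");
        }
    });

    // Non-blocking poll.
    match rx.try_recv() {
        Ok(v) => println!("already ready: {}", v),
        Err(TryRecvError::Empty) => println!("nothing yet"),
        Err(TryRecvError::Disconnected) => println!("sender gone"),
    }

    // Blocking iteration until the sending side hangs up.
    for v in rx.iter() {
        println!("got {}", v);
    }
}
```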
This replaces the link meta attributes with a pkgid attribute and uses a hash
of this as the crate hash. This makes the crate hash computable by things
other than the Rust compiler. It also switches the hash function to SHA1 since
that is much more likely to be available in shell, Python, etc than SipHash.
Fixes #10188, #8523.
This implements parts of the changes to `Result` and `Option` I proposed and discussed in this thread: https://mail.mozilla.org/pipermail/rust-dev/2013-November/006254.html
This PR includes:
- Adding `ok()` and `err()` option adapters for both `Result` variants.
- Removing `get_ref`, `expect` and iterator constructors for `Result`, as they are reachable with the variant adapters.
- Removing `Result`s `ToStr` bound on the error type because of composability issues. (See https://mail.mozilla.org/pipermail/rust-dev/2013-November/006283.html)
- Some warning cleanups
In order to keep up to date with changes to the libraries that `llvm-config`
spits out, the dependencies to the LLVM are a dynamically generated rust file.
This file is now automatically updated whenever LLVM is updated to get kept
up-to-date.
At the same time, this cleans out some old cruft which isn't necessary in the
makefiles in terms of dependencies.
Closes #10745. Closes #10744.
Right now, as pointed out in #8132, it is very easy to introduce a subtle race
in the runtime. I believe that this is the cause of the current flakiness on the
bots.
I have taken the last idea mentioned in that issue which is to use a lock around
descheduling and context switching in order to solve this race.
Closes #8132
This reverts commit c54427ddfb.
Leave the #[ignores] in that were added to rustpkg tests.
Conflicts:
src/librustc/driver/driver.rs
src/librustc/metadata/creader.rs
This registers new snapshots after the landing of #10528, and then goes on to tweak the build process to build a monolithic `rustc` binary for use in future snapshots. This mainly involved dropping the dynamic dependency on `librustllvm`, so that's now built as a static library (with a dynamically generated rust file listing LLVM dependencies).
This currently doesn't actually make the snapshot any smaller (24MB => 23MB), but I noticed that the executable has 11MB of metadata so once progress is made on #10740 we should have a much smaller snapshot.
There's not really a super-compelling reason to distribute just a binary because we have all the infrastructure for dealing with a directory structure, but to me it seems "more correct" that a snapshot compiler is just a `rustc` binary.
- Removed module reexport workaround for the integer module macros
- Removed legacy reexports of `cmp::{min, max}` in the integer module macros
- Combined a few macros in `vec` into one
- Documented a few issues
This adds an implementation of the Chase-Lev work-stealing deque to libstd
under std::rt::deque. I've been unable to break the implementation of the deque
itself, and it's not super highly optimized just yet (everything uses a SeqCst
memory ordering).
The major snag in implementing the chase-lev deque is that the buffers used to
store data internally cannot get deallocated back to the OS. In the meantime, a
shared buffer pool (synchronized by a normal mutex) is used to
deallocate/allocate buffers from. This is done in hope of not overcommitting too
much memory. It is in theory possible to eventually free the buffers, but one
must be very careful in doing so.
I was unable to get some good numbers from src/test/bench tests (I don't think
many of them are slamming the work queue that much), but I was able to get some
good numbers from one of my own tests. In a recent rewrite of select::select(),
I found that my implementation was incredibly slow due to contention on the
shared work queue. Upon switching to the parallel deque, I saw the contention
drop to 0 and the runtime go from 1.6s to 0.9s with the most amount of time
spent in libuv awakening the schedulers (plus allocations).
Closes #4877
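A usage sketch of a Chase-Lev-style deque, assuming the present-day crossbeam-deque crate (a descendant of this design) rather than the old std::rt::deque module:

```rust
use crossbeam_deque::{Steal, Worker};

fn main() {
    let worker = Worker::new_lifo(); // the owning scheduler's end
    let stealer = worker.stealer();  // handed out to other schedulers

    for i in 0..4 {
        worker.push(i);
    }

    // The owner pops the most recently pushed item locally...
    assert_eq!(worker.pop(), Some(3));

    // ...while thieves steal from the opposite end.
    match stealer.steal() {
        Steal::Success(v) => println!("stole {}", v),
        Steal::Empty | Steal::Retry => println!("nothing to steal"),
    }
}
```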
* Added doc comments explaining what all public functionality does.
* Added the ability to spawn a detached thread
* Added the ability for the procs to return a value in 'join'
Whenever the runtime is shut down, add a few hooks to clean up some of the
statically initialized data of the runtime. Note that this is an unsafe
operation because there's no guarantee on behalf of the runtime that there's no
other code running which is using the runtime.
This helps turn down the noise a bit in the valgrind output related to
statically initialized mutexes. It doesn't turn the noise down to 0 because
there are still statically initialized mutexes in dynamic_lib and
os::with_env_lock, but I believe that it would be easy enough to add exceptions
for those cases and I don't think that it's the runtime's job to go and clean up
that data.
This patchset fixes some parts broken on Win64.
This also adds `--disable-pthreads` flags to llvm on mingw-w64 archs (both 32-bit and 64-bit, not mingw) due to bad performance. See #8996 for discussion.
The reasons for doing this are:
* The model on which linked failure is based is inherently complex
* The implementation is also very complex, and there are few remaining who
fully understand the implementation
* There are existing race conditions in the core context switching function of
the scheduler, and possibly others.
* It's unclear whether this model of linked failure maps well to a 1:1 threading
model
Linked failure is often a desired aspect of tasks, but we would like to take a
much more conservative approach in re-implementing linked failure if at all.
Closes #8674. Closes #8318. Closes #8863.
I cannot tell whether the original comment was unsure about the
arithmetic calculations, or if it was unsure about the assumptions
being made about the alignment of the current allocation pointer.
The arithmetic calculation looks fine to me, though. This technique
is documented e.g. in Henry Warren's "Hacker's Delight" (section 3-1).
(I am sure one can find it elsewhere too, it's not an obscure
property.)
There are issues with reading stdin when it is actually attached to a pipe, but
I have run into no problems in writing to stdout/stderr when they are attached
to pipes.
Explicitly have the only C++ portion of the runtime be one file with exception
handling. All other runtime files must now live in C and be fully defined in C.
This commit re-organizes the io::native module slightly in order to have a
working implementation of rtio::IoFactory which uses native implementations. The
goal is to seamlessly multiplex among libuv/native implementations wherever
necessary.
Right now most of the native I/O is unimplemented, but we have existing bindings
for file descriptors and processes which have been hooked up. What this means is
that you can now invoke println!() from libstd with no local task, no local
scheduler, and even without libuv.
There's still plenty of work to do on the native I/O factory, but this is the
first steps into making it an official portion of the standard library. I don't
expect anyone to reach into io::native directly, but rather only std::io
primitives will be used. Each std::io interface seamlessly falls back onto the
native I/O implementation if the local scheduler doesn't have a libuv one
(hurray trait objects!)
I was benchmarking rust-http recently, and I saw that 50% of its time was spent
creating buffered readers/writers. Although rust-http wasn't using
std::rt::io::buffered, the same idea applies here. It's much cheaper to
malloc a large region and not initialize it than to set it all to 0. Buffered
readers/writers never use uninitialized data, and their internal buffers are
encapsulated, so any usage of uninitialized slots are an implementation bug in
the readers/writers.
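The allocation difference in present-day terms, as a tiny sketch:

```rust
// `Vec::with_capacity` reserves the region without touching it, unlike
// `vec![0u8; cap]`, which writes every byte up front. The reader/writer then
// only ever exposes the prefix it has actually filled.
fn make_buffer(cap: usize) -> Vec<u8> {
    Vec::with_capacity(cap)
}
```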
I increased this to 4MB when I implemented abort-on-stack-overflow for Rust
functions. Now that the fixed_stack_segment attribute is removed, no rust
function will ever reasonably request 2MB of stack (due to calling an FFI
function).
The default size of 2MB should be plenty for everyday use-cases, and tasks can
still request more stack via the spawning API.
These two attributes are no longer useful now that Rust has decided to leave
segmented stacks behind. It is assumed that the rust task's stack is always
large enough to make an FFI call (due to the stack being very large).
There's always the case of stack overflow, however, to consider. This does not
change the behavior of stack overflow in Rust. This is still normally triggered
by the __morestack function and aborts the whole process.
C stack overflow will continue to corrupt the stack, however (as it did before
this commit as well). The future improvement of a guard page at the end of every
rust stack is still unimplemented and is intended to be the mechanism through
which we attempt to detect C stack overflow.
Closes #8822. Closes #10155.
It appears that uv's support for interacting with a stdio stream as a tty when
it's actually a pipe is pretty problematic. To get around this, promote a check
to see if the stream is a tty to the top of the tty constructor, and bail out
quickly if it's not identified as a tty.
Closes #10237
It turns out that the uv implementation would cause use-after-free if the idle
callback was used after the call to `close`, and additionally nothing would ever
really work that well if `start()` were called twice. To change this, the
`start` and `close` methods were removed in favor of specifying the callback at
creation, and allowing destruction to take care of closing the watcher.
This binds to the appropriate pthreads_* and Windows specific functions
and calls them from Rust. This allows for removal of the C++ support
code for threads.
Fixes #10162
Right now if you're running a program with its output piped to some location and
the program decides to go awry, when you kill the program via some signal none
of the program's last 4K of output will get printed to the screen. In theory the
solution to this would be to register a signal handler as part of the runtime
which then flushes the output stream.
I believe that the current behavior is far enough from what's expected that we
shouldn't be providing this sort of "super buffering" by default when stdout
isn't attached to a tty.
This isn't quite as fancy as the struct in #9913, but I'm not sure we should be exposing crate names/hashes of the types. That being said, it'd be pretty easy to extend this (the deterministic hashing regardless of what crate you're in was the hard part).
This renames the `file` module to `fs` because that more accurately describes
its current purpose (manipulating the filesystem, not just files).
Additionally, this adds an UnstableFileStat structure as a nested structure of
FileStat to signify that the fields should not be depended on. The structure is
currently flagged with #[unstable], but it's unlikely that it has much meaning.
Closes #10241
This adds bindings to the remaining functions provided by libuv, all of which
are useful operations on files which need to get exposed somehow.
Some highlights:
* Dropped `FileReader` and `FileWriter` and `FileStream` for one `File` type
* Moved all file-related methods to be static methods under `File`
* All directory related methods are still top-level functions
* Created `io::FilePermission` types (backed by u32) that are what you'd expect
* Created `io::FileType` and refactored `FileStat` to use FileType and
FilePermission
* Removed the expanding matrix of `FileMode` operations. The mode of reading a
file will not have the O_CREAT flag, but a write mode will always have the
O_CREAT flag.
Closes #10130. Closes #10131. Closes #10121.
This commit moves all thread-blocking I/O functions from the std::os module.
Their replacements can be found in either std::rt::io::file or in a hidden
"old_os" module inside of native::file. I didn't want to outright delete these
functions because they have a lot of special casing learned over time for each
OS/platform, and I imagine that these will someday get integrated into a
blocking implementation of IoFactory. For now, they're moved to a private module
to prevent bitrot and still have tests to ensure that they work.
I've also expanded the extensions to a few more methods defined on Path, most of
which were previously defined in std::os but now have non-thread-blocking
implementations as part of using the current IoFactory.
The api of io::file is in flux, but I plan on changing it in the next commit as
well.
Closes #10057
The invocation for making a directory should be able to specify a mode to make
the directory with (instead of defaulting to one particular mode). Additionally,
libuv and various OSes implement efficient versions of renaming files, so this
operation is exposed as an IoFactory call.
Now that the type_id intrinsic is working across crates, all of these
unnecessary messages can be removed to have the failure type for a task truly be
~Any and only ~Any
Tests now have the same name as the test that they're running (to allow for
easier diagnosing of failure sources), and the main task is now specially named
`<main>` instead of `<unnamed>`.
Closes #10195. Closes #10073.
This takes the last reforms on the `Option` type and applies them to `Result` too. For that, I reordered and grouped the functions in both modules, and also did some refactorings:
- Added `as_ref` and `as_mut` adapters to `Result`.
- Renamed `Result::map_move` to `Result::map` (same for `_err` variant), deleted other map functions.
- Made the `.expect()` methods be generic over anything you can
fail with.
- Updated some doc comments to the line doc comment style
- Cleaned up and extended standard trait implementations on `Option` and `Result`
- Removed legacy implementations in the `option` and `result` module
The previous method was unsound because you could very easily create two mutable
pointers which alias the same location (not sound behavior). This hides the
function which does so and then exports an explicit flush() function (with
documentation about how it works).
Cleaned up the source in a few places
Renamed `map_move` to `map`, removed other `map` methods
Added `as_ref` and `as_mut` adapters to `Result`
Added `fmt::Default` impl
- `begin_unwind` and `fail!` is now generic over any `T: Any + Send`.
- Every value you fail with gets boxed as an `~Any`.
- Because of implementation issues, `&'static str` and `~str` are still
handled specially behind the scenes.
- Changed the big macro source string in libsyntax to a raw string
literal, and enabled doc comments there.
Allows an enum with a discriminant to use any of the primitive integer types to store it. By default the smallest usable type is chosen, but this can be overridden with an attribute: `#[repr(int)]` etc., or `#[repr(C)]` to match the target's C ABI for the equivalent C enum.
Also adds a lint pass for using non-FFI safe enums in extern declarations, checks that specified discriminants can be stored in the specified type if any, and fixes assorted code that was assuming int.
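For example (using the attribute forms described above):

```rust
// Discriminant stored in a single byte.
#[repr(u8)]
enum Flag {
    Off = 0,
    On = 1,
}

// Laid out like the equivalent C enum for the target's ABI, so it is safe to
// use in extern declarations.
#[repr(C)]
enum Status {
    Ok = 0,
    Err = 255,
}
```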
This is one of the final steps needed to complete #9128. It still needs a little bit of polish before closing that issue, but it's in a pretty much "done" state now.
The idea here is that the entire event loop implementation using libuv is now housed in `librustuv` as a completely separate library. This library is then injected (via `extern mod rustuv`) into executable builds (similarly to how libstd is injected, tunable via `#[no_uv]`) to bring in the "rust blessed event loop implementation."
Codegen-wise, there is a new `event_loop_factory` language item which is tagged on a function with 0 arguments returning `~EventLoop`. This function's symbol is then inserted into the crate map for an executable crate, and if there is no definition of the `event_loop_factory` language item then the value is null.
What this means is that embedding rust as a library in another language just got a little harder. Libraries don't have crate maps, which means that there's no way to find the event loop implementation to spin up the runtime. That being said, it's always possible to build the runtime manually. This request also makes more runtime components public which should probably be public anyway. This new public-ness should allow custom scheduler setups everywhere regardless of whether you follow the `rt::start `path.
There are a few reasons that this is a desirable move to take:
1. Proof of concept that a third party event loop is possible
2. Clear separation of responsibility between rt::io and the uv-backend
3. Enforce in the future that the event loop is "pluggable" and replaceable
Here's a quick summary of the points of this pull request which make this
possible:
* Two new lang items were introduced: event_loop, and event_loop_factory.
The idea of a "factory" is to define a function which can be called with no
arguments and will return the new event loop as a trait object. This factory
is emitted to the crate map when building an executable. The factory doesn't
have to exist, and when it doesn't then an empty slot is in the crate map and
a basic event loop with no I/O support is provided to the runtime.
* When building an executable, then the rustuv crate will be linked by default
(providing a default implementation of the event loop) via a similar method to
injecting a dependency on libstd. This is currently the only location where
the rustuv crate is ever linked.
* There is a new #[no_uv] attribute (implied by #[no_std]) which denies
implicitly linking to rustuv by default
Closes #5019
Primarily this makes the Scheduler and all of its related interfaces public. The
reason for doing this is that currently any extern event loops had no access to
the scheduler at all. This allows third-party event loops to manipulate the
scheduler, along with allowing the uv event loop to live inside of its own
crate.
This drops more of the old C++ runtime to rather be written in rust. A few
features were lost along the way, but hopefully not too many. The main loss is
that there are no longer backtraces associated with allocations (rust doesn't
have a way of acquiring those just yet). Other than that though, I believe that
the rest of the debugging utilities made their way over into rust.
Closes #8704
Some code cleanup, sorting of import blocks
Removed std::unstable::UnsafeArc's use of Either
Added run-fail tests for the new FailWithCause impls
Changed future_result and try to return Result<(), ~Any>.
- Internally, there is an enum of possible fail messages passed around.
- In case of linked failure or a string message, the ~Any gets
lazily allocated in future_result's recv method.
- For that, future result now returns a wrapper around a Port.
- Moved and renamed task::TaskResult into rt::task::UnwindResult
and made it an internal enum.
- Introduced a replacement typedef `type TaskResult = Result<(), ~Any>`.
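A present-day analogue of the `Result<(), ~Any>` shape, using std::thread in place of the old task API: the boxed failure value comes back from join and can be downcast.

```rust
use std::any::Any;
use std::thread;

fn main() {
    let res: Result<(), Box<dyn Any + Send>> =
        thread::spawn(|| panic!("boom: {}", 42)).join();

    if let Err(payload) = res {
        // A formatted panic carries a String payload.
        if let Some(msg) = payload.downcast_ref::<String>() {
            println!("task failed with: {}", msg);
        }
    }
}
```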
I'm not entirely sure why this is happening, but the server task is never seeing
the second send of the client task, and this test will very reliably fail to
complete on windows.
It was pretty much a miracle that these tests were ever passing. They would
never have passed in the single threaded case because only one sigint in the
tests is ever generated, but when run in parallel two sigints will be generated.
This optimizes the `home_for_io` code path by requiring fewer scheduler
operations in some situations.
When moving to your home scheduler, this no longer forces a context switch if
you're already on the home scheduler. Instead, the homing code now simply pins
you to your current scheduler (making it so you can't be stolen away). If you're
not on your home scheduler, then we context switch away, sending you to your
home scheduler.
When the I/O operation is done, then we also no longer forcibly trigger a
context switch. Instead, the action is cased on whether the task is homed or
not. If a task does not have a home, then the task is re-flagged as not having a
home and no context switch is performed. If a task is homed to the current
scheduler, then we don't do anything, and if the task is homed to a foreign
scheduler, then it's sent along its merry way.
I verified that there are about a third as many `write` syscalls done in print
operations now. Libuv uses write to implement async handles, and the homing
before and after each I/O operation was triggering a write on these async
handles. Additionally, using the terrible benchmark of printing 10k times in a
loop, this drives the runtime from 0.6s down to 0.3s (yay!).
Almost all languages provide some form of buffering of the stdout stream, and
this commit adds this feature for rust. A handle to stdout is lazily initialized
in the Task structure as a buffered owned Writer trait object. The buffer
behavior depends on where stdout is directed to. Like C, this line-buffers the
stream when the output goes to a terminal (flushes on newlines), and also like C
this uses a fixed-size buffer when output is not directed at a terminal.
We may decide the fixed-size buffering is overkill, but it certainly does reduce
write syscall counts when piping output elsewhere. This is a *huge* benefit to
any code using logging macros or the printing macros. Formatting emits calls to
`write` very frequently, and to have each of them backed by a write syscall was
very expensive.
In a local benchmark of printing 10000 lines of "what" to stdout, I got the
following timings:
when | terminal | redirected
----------|---------------|--------
before | 0.575s | 0.525s
after | 0.197s | 0.013s
C | 0.019s | 0.004s
I can also confirm that we're buffering the output appropriately in both
situations. We're still far slower than C, but I believe much of that has to do
with the "homing" that all tasks do; we're still performing an order of
magnitude more write syscalls than C does.
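A sketch of that buffering policy in present-day Rust (not the 2013 Task-local handle): line buffering when stdout is a terminal, a fixed-size block buffer otherwise.

```rust
use std::io::{self, BufWriter, IsTerminal, LineWriter, Write};

fn make_stdout() -> Box<dyn Write> {
    let raw = io::stdout();
    if raw.is_terminal() {
        // Flush on every newline, like C's line-buffered stdout on a tty.
        Box::new(LineWriter::new(raw))
    } else {
        // Fixed-size buffer when output is piped elsewhere.
        Box::new(BufWriter::with_capacity(64 * 1024, raw))
    }
}
```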
It's not guaranteed that there will always be an event loop to run, and this
implementation will serve as an incredibly basic one which does not provide any
I/O, but allows the scheduler to still run.
cc #9128
This is a peculiar function to require event loops to implement, and it's only
used in one spot during tests right now. Instead, a possibly more robust API
for timers should be used rather than requiring all event loops to implement a
curious-looking function.
The PausibleIdleCallback must have some handle into the event loop, and because
struct destructors are run in order of top-to-bottom in order of fields, this
meant that the event loop was getting destroyed before the idle callback was
getting destroyed.
I can't confirm that this fixes a problem in how we use libuv, but it does
semantically fix a problem for usage with other event loops.
This adds constructors to pipe streams in the new runtime to take ownership of
file descriptors, and also fixes a few tests relating to the std::run changes
(new errors are raised on io_error and one test is xfail'd).
I was seeing a lot of weird behavior with stdin behaving as a tty, and it
doesn't really quite make sense, so instead this moves to using libuv's pipes
instead (which make more sense for stdin specifically).
This prevents piping input to rustc hanging forever.
The general idea is to remove conditions completely from I/O, so in the meantime
the read_error condition is removed in favor of the io_error condition.
This isn't an ideal patch, and the comment why is in the code. Basically uvio
uses task::unkillable which touches the kill flag for a task, and if the task is
failing due to mismanagement of the kill flag, then there will be serious
problems when the task tries to print that it's failing.
When uv's TTY I/O is used for the stdio streams, the file descriptors are put
into a non-blocking mode. This means that other concurrent writes to the same
stream can fail with EAGAIN or EWOULDBLOCK. By routing all I/O through event-loop I/O, we
avoid this error.
There is one location which cannot move, which is the runtime's dumb_println
function. This was implemented to handle the EAGAIN and EWOULDBLOCK errors and
simply retry again and again.
This involved changing a fair amount of code, rooted in how we access the local
IoFactory instance. I added a helper method to the rtio module to access the
optional local IoFactory. This is different than before in which it was assumed
that a local IoFactory was *always* present. Now, a separate io_error is raised
when an IoFactory is not present, yet I/O is requested.
This removes the PathLike trait associated with this "support module". This is
yet another "container of bytes" trait, so I didn't want to duplicate what
already exists throughout libstd. In actuality, we're going to pass off C strings
to the libuv APIs, so instead the arguments are now bound with the 'ToCStr'
trait instead.
Additionally, a layer of complexity was removed by immediately converting these
type-generic parameters into CStrings to get handed off to libuv apis.
We get a little more functionality from libuv for these kinds of streams (things
like terminal dimensions), and it also appears to more gracefully handle the
stream being a window. Beforehand, if you used stdio and hit CTRL+d on a
process, libuv would continually return 0-length successful reads instead of
interpreting that the stream was closed.
I was hoping to be able to write tests for this, but currently the testing
infrastructure doesn't allow tests with a stdin and a stdout, but this has been
manually tested! (not that it means much)
- Adds the `Sample` and `IndependentSample` traits for generating numbers where there are parameters (e.g. a list of elements to draw from, or the mean/variance of a normal distribution). The former takes `&mut self` and the latter takes `&self` (this is the only difference).
- Adds proper `Normal` and `Exp`-onential distributions
- Adds `Range` which generates `[lo, hi)` generically & properly (via a new trait) replacing the incorrect behaviour of `Rng.gen_integer_range` (this has become `Rng.gen_range` for convenience, it's far more efficient to use `Range` itself)
- Move the `Weighted` struct from `std::rand` to `std::rand::distributions` & improve it
- optimisations and docs
I'm planning on doing more updates, but the section in the tutorial stood out to me since the 'rust' tool no longer exists; it should probably be removed to lessen confusion.
This reifies the computations required for uniformity done by
(the old) `Rng.gen_integer_range` (now Rng.gen_range), so that they can
be amortised over many invocations, if it is called in a loop.
Also, it makes it correct, but using a trait + impls for each type,
rather than trying to coerce `Int` + `u64` to do the right thing. This
also makes it more extensible, e.g. big integers could & should
implement SampleRange.
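A usage sketch with the present-day rand crate (0.8-era API), a descendant of this design: building the distribution once amortises the uniformity computation across every sample.

```rust
use rand::distributions::{Distribution, Uniform};

fn main() {
    let mut rng = rand::thread_rng();
    // The range set-up is computed once here...
    let die = Uniform::from(1..=6);
    // ...and reused for every draw in the loop.
    for _ in 0..10 {
        print!("{} ", die.sample(&mut rng));
    }
    println!();
}
```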
This commit re-introduces the functionality of __morestack in a way that it was
not originally anticipated. Rust does not currently have segmented stacks,
rather just large stack segments. We do not detect when these stack segments are
overrun currently, but this commit leverages __morestack in order to check this.
This commit purges a lot of the old __morestack and stack limit C++
functionality, migrating the necessary chunks to rust. The stack limit is now
entirely maintained in rust, and the "main logic bits" of __morestack are now
also implemented in rust as well.
I put my best effort into validating that this currently builds and runs successfully on osx and linux 32/64 bit, but I was unable to get this working on windows. We never did have unwinding through __morestack frames, and although I tried poking at it for a bit, I was unable to understand why we don't get unwinding right now.
A focus of this commit is to implement as much of the logic in rust as possible. This involved some liberal usage of `no_split_stack` in various locations, along with some use of the `asm!` macro (scary). I modified a bit of C++ to stop calling `record_sp_limit` because this is no longer defined in C++, rather in rust.
Another consequence of this commit is that `thread_local_storage::{get, set}` must both be flagged with `#[rust_stack]`. I've briefly looked at the implementations on osx/linux/windows to ensure that they're pretty small stacks, and I'm pretty sure that they're definitely less than 20K stacks, so we probably don't have a lot to worry about.
Other things worthy of note:
* The default stack size is now 4MB instead of 2MB. This is so that when we request 2MB to call a C function you don't immediately overflow because you have consumed any stack at all.
* `asm!` is actually pretty cool, maybe we could actually define context switching with it?
* I wanted to add links to the internet about all this jazz of storing information in TLS, but I was only able to find a link for the windows implementation. Otherwise my suggestion is just "disassemble on that arch and see what happens"
* I put my best effort forward on arm/mips to tweak __morestack correctly, we have no ability to test this so an extra set of eyes would be useful on these spots.
* This is all really tricky stuff, so I tried to put as many comments as I thought were necessary, but if anything is still unclear (or I completely forgot to take something into account), I'm willing to write more!
This commit resumes management of the stack boundaries and limits when switching
between tasks. This additionally leverages the __morestack function to run code
on "stack overflow". The current behavior is to abort the process, but this is
probably not the best behavior in the long term (for details, see the comment I
wrote up in the stack exhaustion routine).
Rewrite the entire `std::path` module from scratch.
`PosixPath` is now based on `~[u8]`, which fixes #7225.
Unnecessary allocation has been eliminated.
There are a lot of clients of `Path` that still assume utf-8 paths.
This is covered in #9639.
...al work
This is causing really awful scheduler behavior where the main thread scheduler is
continually waking up, stealing work, discovering it can't actually run the work,
and sending it off to another scheduler.
No test cases because we don't have suitable instrumentation for it.
Add a new trait BytesContainer that is implemented for both byte vectors
and strings.
Convert Path::from_vec and ::from_str to one function, Path::new().
Remove all the _str-suffixed mutation methods (push, join, with_*,
set_*) and modify the non-suffixed versions to use BytesContainer.
Remove the old path.
Rename path2 to path.
Update all clients for the new path.
Also make some miscellaneous changes to the Path APIs to help the
adoption process.
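A minimal sketch of the BytesContainer idea (hypothetical, simplified types rather than the real libstd trait): one `new` constructor that accepts either bytes or strings.

```rust
trait BytesContainer {
    fn container_as_bytes(&self) -> &[u8];
}

impl BytesContainer for str {
    fn container_as_bytes(&self) -> &[u8] {
        self.as_bytes()
    }
}

impl BytesContainer for [u8] {
    fn container_as_bytes(&self) -> &[u8] {
        self
    }
}

struct MyPath {
    repr: Vec<u8>, // raw bytes, not required to be utf-8
}

impl MyPath {
    fn new<T: BytesContainer + ?Sized>(path: &T) -> MyPath {
        MyPath { repr: path.container_as_bytes().to_vec() }
    }
}

fn main() {
    let from_str = MyPath::new("foo/bar");
    let from_bytes = MyPath::new(&b"foo/bar"[..]);
    assert_eq!(from_str.repr, from_bytes.repr);
}
```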
This patch removes the code responsible for handling older CrateMap versions (as discussed during #9593). Only the new (safer) layout is supported now.
This implements a number of the baby steps needed to start eliminating everything inside of `std::io`. It turns out that there are a *lot* of users of that module, so I'm going to try to tackle them separately instead of bringing down the whole system all at once.
This pull implements a large amount of unimplemented functionality inside of `std::rt::io` including:
* Native file I/O (file descriptors, *FILE)
* Native stdio (through the native file descriptors)
* Native processes (extracted from `std::run`)
I also found that there are a number of users of `std::io` which desire to read an input line-by-line, so I added an implementation of `read_until` and `read_line` to `BufferedReader`.
With all of these changes in place, I started to axe various usages of `std::io`. There are a lot of one-off uses here and there, but the major use-case remaining that doesn't have a fantastic solution is `extra::json`. I ran into a few compiler bugs when attempting to remove that, so I figured I'd come back to it later instead.
There is one fairly major change in this pull, and it's moving from native stdio to uv stdio via `print` and `println`. Unfortunately logging still goes through native I/O (via `dumb_println`). This is going to need some thinking, because I still want the goal of logging/printing to be 0 allocations, and this is not possible if `io::stdio::stderr()` is called on each log message. Instead I think that this may need to be cached as the `logger` field inside the `Task` struct, but that will require a little more workings to get right (this is also a similar problem for print/println, do we cache `stdout()` to not have to re-create it every time?).