When uv's TTY I/O is used for the stdio streams, the file descriptors are put
into a non-blocking mode. This means that other concurrent writes to the same
stream can fail with EAGAIN or EWOULDBLOCK. By routing all I/O through the
event loop, we avoid this error.
There is one location which cannot move onto the event loop: the runtime's
dumb_println function. It was implemented to handle the EAGAIN and EWOULDBLOCK
errors by simply retrying the write until it succeeds.
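As a rough illustration (modern Rust, not the actual runtime code), the retry behaviour amounts to something like this:

```rust
use std::io::{self, Write};

// Minimal sketch of a dumb_println-style write: spin on EAGAIN/EWOULDBLOCK
// (surfaced as WouldBlock) until the whole buffer has been written.
fn dumb_write_all<W: Write>(out: &mut W, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        match out.write(buf) {
            Ok(0) => return Err(io::Error::new(io::ErrorKind::WriteZero, "wrote 0 bytes")),
            Ok(n) => buf = &buf[n..],
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => continue,  // retry
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue, // retry
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```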
We get a little more functionality from libuv for these kinds of streams (things
like terminal dimensions), and it also appears to handle the stream being closed
more gracefully. Previously, if you used stdio and hit CTRL+D on a process,
libuv would continually return 0-length successful reads instead of recognizing
that the stream was closed.
I was hoping to be able to write tests for this, but the testing infrastructure
doesn't currently allow tests that use both stdin and stdout. This has been
manually tested, though! (not that it means much)
- Adds the `Sample` and `IndependentSample` traits for generating numbers from distributions that have parameters (e.g. a list of elements to draw from, or the mean/variance of a normal distribution). The former takes `&mut self` and the latter takes `&self`; this is the only difference (see the sketch after this list).
- Adds proper `Normal` and `Exp` (exponential) distributions
- Adds `Range`, which generates `[lo, hi)` generically & properly (via a new trait), replacing the incorrect behaviour of `Rng.gen_integer_range` (renamed to `Rng.gen_range` for convenience; using `Range` itself is far more efficient)
- Moves the `Weighted` struct from `std::rand` to `std::rand::distributions` & improves it
- optimisations and docs
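A rough sketch of the two traits (names as described above; the exact `std::rand` signatures may differ slightly):

```rust
// Stand-in RNG trait, only here to keep the sketch self-contained.
trait Rng {
    fn next_u32(&mut self) -> u32;
}

/// Distributions that may update internal state while sampling.
trait Sample<T> {
    fn sample<R: Rng>(&mut self, rng: &mut R) -> T;
}

/// Distributions that can sample through a shared reference,
/// e.g. a `Range` or a `Normal` whose parameters never change.
trait IndependentSample<T>: Sample<T> {
    fn ind_sample<R: Rng>(&self, rng: &mut R) -> T;
}
```

Constructing a `Range` once and calling `ind_sample` repeatedly amortises the set-up work that `Rng.gen_range` has to redo on every call.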
OS X 10.9's linker has a bug that results in it failing to preserve
DWARF unwind information when passed the -no_compact_unwind flag.
This flag is passed on OS X because the unwind information for
__morestack cannot be represented by the compact unwind format.
We can work around this problem by using a more targeted approach
to disabling compact unwind information. The OS X linker looks for
a particular pattern in the DWARF unwind information and will not
attempt to convert the unwind information to the compact format.
The pattern in question is the return address register being saved
twice to the same location.
Fixes #6849.
This commit re-introduces the functionality of __morestack in a way that was
not originally anticipated. Rust does not currently have segmented stacks, just
large stack segments. We do not currently detect when these stack segments are
overrun, but this commit leverages __morestack to check for exactly that.
This commit purges a lot of the old __morestack and stack-limit C++
functionality, migrating the necessary chunks to rust. The stack limit is now
maintained entirely in rust, and the "main logic bits" of __morestack are now
implemented in rust as well.
I put my best effort into validating that this currently builds and runs successfully on osx and linux 32/64 bit, but I was unable to get this working on windows. We never did have unwinding through __morestack frames, and although I tried poking at it for a bit, I was unable to understand why we don't get unwinding right now.
A focus of this commit is to implement as much of the logic in rust as possible. This involved some liberal usage of `no_split_stack` in various locations, along with some use of the `asm!` macro (scary). I modified a bit of C++ to stop calling `record_sp_limit` because this is no longer defined in C++, rather in rust.
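Purely as an illustration of the kind of thing `record_sp_limit` boils down to (modern `asm!` syntax, and the TLS offset below is a placeholder rather than the real per-platform slot):

```rust
#[cfg(all(target_arch = "x86_64", target_os = "linux"))]
mod sp_limit {
    use std::arch::asm;

    // Stash the stack limit in a thread-local slot so the __morestack
    // prologue check can read it cheaply. 0x70 is a made-up offset.
    pub unsafe fn record_sp_limit(limit: usize) {
        asm!("mov qword ptr fs:[0x70], {}", in(reg) limit);
    }

    pub unsafe fn get_sp_limit() -> usize {
        let limit: usize;
        asm!("mov {}, qword ptr fs:[0x70]", out(reg) limit);
        limit
    }
}
```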
Another consequence of this commit is that `thread_local_storage::{get, set}` must both be flagged with `#[rust_stack]`. I've briefly looked at the implementations on osx/linux/windows to confirm that they use pretty small stacks, definitely less than 20K, so we probably don't have a lot to worry about.
Other things worthy of note:
* The default stack size is now 4MB instead of 2MB. This is so that when we request 2MB of stack to call a C function you don't immediately overflow just because you've already consumed some of your own stack.
* `asm!` is actually pretty cool, maybe we could actually define context switching with it?
* I wanted to add links to the internet about all this jazz of storing information in TLS, but I was only able to find a link for the windows implementation. Otherwise my suggestion is just "disassemble on that arch and see what happens"
* I put my best effort forward on arm/mips to tweak __morestack correctly; we have no ability to test this, so an extra set of eyes would be useful on these spots.
* This is all really tricky stuff, so I tried to put as many comments as I thought were necessary, but if anything is still unclear (or I completely forgot to take something into account), I'm willing to write more!
This commit resumes management of the stack boundaries and limits when switching
between tasks. This additionally leverages the __morestack function to run code
on "stack overflow". The current behavior is to abort the process, but this is
probably not the best behavior in the long term (for details, see the comment I
wrote up in the stack exhaustion routine).
As discovered in #9925, it turns out that we weren't using jemalloc on most
platforms. Additionally, on some platforms we were using it incorrectly and
mismatching the libc version of malloc with the jemalloc version of malloc.
Additionally, it's not clear that using jemalloc is indeed a large performance
win in particular situations. This could be due to building jemalloc
incorrectly, or possibly due to using jemalloc incorrectly, but it is unclear at
this time.
Until jemalloc can be confirmed to integrate correctly on all platforms and has
verifiable large performance wins on platforms as well, it shouldn't be part of
the default build process. It should still be available for use via the
LD_PRELOAD trick on various architectures, but using it as the default allocator
for everything would require guaranteeing that it works in all situations,
which it currently doesn't.
Closes #9925
This lets the C++ code in the rt handle the (slightly) tricky parts of
random number generation: e.g. error detection/handling, and using the
values of the `#define`d options to the various functions.
The root issue is that dlerror isn't reentrant or even thread safe.
The Windows code isn't affected, since errno is thread-local on Windows and the
call runs inside an `atomically` block to ensure there isn't a green-thread
context switch.
Closes #8156
This adds a large doc-block to the top of the std::logging module explaining how
to use it. This is mostly just making sure that all the information in the
manual's section about logging is also here (in case someone decides to look
into this module first).
This also removes the old console_{on,off} methods. As far as I can tell, the
functions were only used by the compiler, and there's no reason for them to be
used because they're all turned off by default anyway (maybe they were turned on
by default at some point...)
I believe that this is the final nail in the coffin and closes #5021
Some of the functions could be converted to rust, but the functions dealing with
signals were moved to rust_builtin.cpp instead (no reason to keep the original
file around for one function).
Closes #2674
Because less C++ is better C++!
This is a re-landing of #8645, except that the bindings are *not* being used to
power std::run just yet. Instead, this adds the bindings as standalone bindings
inside the rt::io::process module.
I made one major change from before, having to do with how pipes are
created/bound. It's much clearer now when you can read/write to a pipe, as
there's an explicit difference (different types) between an unbound and a bound
pipe. The process configuration now takes unbound pipes (and consumes ownership
of them), and will return corresponding pipe structures back if spawning is
successful (otherwise everything is destroyed normally).
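A rough sketch of the shape this gives the API (hypothetical names; the real `rt::io::process` types differ in detail):

```rust
// A pipe you have merely created: it cannot be read or written yet.
struct UnboundPipeStream;

// A pipe that has been bound to a spawned child: this one supports I/O.
struct PipeStream;

// The configuration consumes ownership of the unbound ends...
struct ProcessConfig {
    io: Vec<UnboundPipeStream>,
}

// ...and a successful spawn hands back the bound, usable ends.
struct Process {
    io: Vec<PipeStream>,
}
```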
std: remove unneeded field from RequestData struct
std: rt::uv::file - map uv_fs_stat & start refactoring calls into FsRequest
std: stubbing out stat calls from the top-down into uvio
std: uv_fs_* operations are now by-val self methods on FsRequest
std: post-rebase cleanup
std: add uv_fs_mkdir|rmdir + tests & minor test cleanup in rt::uv::file
WORKING: fleshing out FileStat and FileInfo + tests
std: reverting test files..
refactoring back and cleanup...
While usage of change_dir_locked is synchronized against itself, it's not
synchronized against other relative path usage, so I'm of the opinion that it
just really doesn't help in running tests. In order to prevent the problems that
have been cropping up, this completely removes the function.
All existing tests (except one) using it have been moved to run-pass tests where
they get their own process and don't need to be synchronized with anyone else.
There is one now-ignored rustpkg test because when I moved it to a run-pass test,
it turned out run-pass isn't set up to support 'extern mod rustc' (it ends up
with linkage failures).
This is a reopening of the libuv-upgrade part of #8645. Hopefully this won't
cause random segfaults all over the place. The windows regression in testing
should also be fixed (it shouldn't build the whole compiler twice).
A notable difference from before is that gyp is now a git submodule instead of
always git-cloned at make time. This allows bundling for releases more easily.
Closes #8850
There were two main differences with the old libuv and the master version:
1. The uv_last_error function is now gone. The error code returned by each
   function is the "last error", so now a UvError is just a wrapper around a
   c_int (see the sketch after this list).
2. The repo no longer includes a makefile, and the build system has changed.
   According to the build directions on joyent/libuv, this now downloads a `gyp`
   program into the `libuv/build` directory and builds using that. This
   shouldn't add any dependencies on autotools or anything like that.
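A sketch of the shape described in point 1 (illustrative only, not the exact bindings):

```rust
// With uv_last_error gone, each libuv call's return code *is* the error,
// so the wrapper is just a newtype around the c_int.
struct UvError(i32);

fn uv_result(code: i32) -> Result<(), UvError> {
    if code < 0 { Err(UvError(code)) } else { Ok(()) }
}
```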
Closes #8407. Closes #6567. Closes #6315.
This patch saves and restores win64's nonvolatile registers.
This patch also saves the stack information of the thread environment
block (TEB), which is at %gs:0x08 and %gs:0x10.
This resolves issue #908.
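For illustration (modern `asm!` syntax; this is not the actual context-switch code), the TEB stack fields mentioned above can be read like so:

```rust
#[cfg(all(target_arch = "x86_64", target_os = "windows"))]
unsafe fn read_teb_stack_bounds() -> (usize, usize) {
    use std::arch::asm;
    let base: usize;
    let limit: usize;
    // StackBase lives at gs:[0x08] and StackLimit at gs:[0x10]; both must be
    // saved and restored when switching between stacks.
    asm!(
        "mov {b}, qword ptr gs:[0x08]",
        "mov {l}, qword ptr gs:[0x10]",
        b = out(reg) base,
        l = out(reg) limit,
    );
    (base, limit)
}
```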
Notable changes:
- On Windows, LLVM integrated assembler emits bad stack unwind tables when segmented stacks are enabled. However, unwind info directives in the assembly output are correct, so we generate assembly first and then run it through an external assembler, just like it is already done for Android builds.
- Linker is invoked via "g++" command instead of "gcc": g++ passes the appropriate magic parameters to the linker, which ensure correct registration of stack unwind tables in dynamic libraries.
libuv handles are tied to the event loop that created them. In order to perform IO, the handle must be on the thread with its home event loop. Thus, when a task wants to do IO it must first go to the IO handle's home event loop and pin itself to the corresponding scheduler while the IO action is in flight. Once the IO action completes, the task is unpinned and either returns to its home scheduler if it is a pinned task, or otherwise stays on the current scheduler.
Making new blocking IO implementations (e.g. files) thread safe is rather simple. Add a home field to the IO handle's struct in uvio and implement the HomingIO trait. Wrap every IO call in the HomingIO.home_for_io method, which will take care of the scheduling.
I'm not sure whether this remains thread safe in the presence of asynchronous IO at the libuv level. If we decide to do that, then this setup should be revisited.
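A condensed sketch of the pattern (the plumbing functions here are stand-ins for the real scheduler API):

```rust
// Hypothetical stand-ins for the scheduler plumbing described above.
struct SchedHandle;
fn go_to_io_home(_home: &mut SchedHandle) { /* migrate the task to its home loop */ }
fn restore_original_home() { /* unpin, or return to the previous scheduler */ }

trait HomingIO {
    fn home(&mut self) -> &mut SchedHandle;

    // Every blocking uvio operation gets wrapped like this.
    fn home_for_io<A, F: FnOnce(&mut Self) -> A>(&mut self, io: F) -> A {
        go_to_io_home(self.home()); // run on the handle's home event loop
        let result = io(self);      // perform the libuv call there
        restore_original_home();    // then undo the pinning
        result
    }
}
```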
what amount a T* pointer must be adjusted to reach the contents
of the box. For `~T` types, this requires knowing the type `T`,
which is not known in the case of objects.
This PR fixes #7235 and #3371 by removing trailing nulls from `str` types. Instead, it replaces the creation of c strings with a new type, `std::c_str::CString`, which wraps a malloced byte array and respects the following (see the sketch after this list):
* No interior nulls
* Ends with a trailing null
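A minimal sketch of those invariants (this assumes the `libc` crate for `free`; the real `std::c_str::CString` has more to it):

```rust
use std::os::raw::c_char;

/// Owns a malloc'd buffer that ends with a trailing null and contains
/// no interior nulls.
struct CString {
    buf: *mut c_char,
}

impl Drop for CString {
    fn drop(&mut self) {
        // The buffer came from malloc, so hand it back to free.
        unsafe { libc::free(self.buf as *mut libc::c_void) }
    }
}
```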
Implements various missing tcp & udp methods. Also fixes handling of ipv4-mapped/compatible ipv6 addresses, and addresses the XXX on `status_to_maybe_uv_error`.
r? @brson
cc #6004 and #3273
This is a rewrite of TLS to get towards not requiring `@` when using task local storage. Most of the rewrite is straightforward, although there are two caveats:
1. Changing `local_set` to not require `@` is blocked on #7673
2. The code in `local_pop` is some of the most unsafe code I've written. A second set of eyes should definitely scrutinize it...
The public-facing interface currently hasn't changed, although it will have to change because `local_data::get` cannot return `Option<T>`, nor can it return `Option<&T>` (the lifetime isn't known). This will have to be changed to be given a closure which yields `&T` (or as an Option). I didn't do this part of the api rewrite in this pull request as I figured that it could wait until `@` is fully removed.
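A sketch of the closure-taking shape (hypothetical signature in modern syntax; the eventual `local_data` API will differ):

```rust
use std::marker::PhantomData;

// Keys are still just static items in this sketch.
struct Key<T: 'static>(PhantomData<T>);

// Instead of returning Option<&T> (whose lifetime the caller can't name),
// hand the borrow to a closure; it only lives for that one call.
fn get<T: 'static, U, F: FnOnce(Option<&T>) -> U>(_key: &'static Key<T>, f: F) -> U {
    // Look up `_key` in the task-local map here; the lookup is elided.
    let found: Option<&T> = None;
    f(found)
}
```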
This also doesn't deal with the issue of using something other than functions as keys, but I'm looking into using static slices (as mentioned in the issues).
r? @graydon, @nikomatsakis, @pcwalton, or @catamorphism
Sorry this is so huge, but it's been accumulating for about a month. There's lots of stuff here, mostly oriented toward enabling multithreaded scheduling and improving compatibility between the old and new runtimes. Adds task pinning so that we can create the 'platform thread' in servo.
[Here](e1555f9b56/src/libstd/rt/mod.rs (L201)) is the current runtime setup code.
About half of this has already been reviewed.
* stop using an atomic counter; it has a significant cost, and valgrind
will already catch these leaks
* remove the extra layer of function calls
* remove the non-null assert in free; freeing null is well defined,
but throwing a failure from free will not be (see the sketch after this list)
* stop initializing the `prev`/`next` pointers
* abort on out-of-memory; failing won't necessarily work
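A sketch of those last two points (a hypothetical wrapper that assumes the `libc` crate, not the actual exchange allocator):

```rust
use std::process::abort;

unsafe fn exchange_malloc(size: usize) -> *mut u8 {
    let p = libc::malloc(size) as *mut u8;
    if p.is_null() {
        // Failing (unwinding) out of the allocator won't necessarily work;
        // just abort the process instead.
        abort();
    }
    p
}

unsafe fn exchange_free(ptr: *mut u8) {
    // No non-null assert here: free(NULL) is well defined and does nothing.
    libc::free(ptr as *mut libc::c_void);
}
```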
This avoids the following pathological scenario that makes threadring OOM:
1) task calls C using fast_ffi, borrowing a big stack from the scheduler.
2) task returns from C and places the big stack on the task-local stack segment list
3) task calls further Rust functions that require growing the stack, and for this reuses the big stack
4) task yields, failing to return the big stack to the scheduler.
5) repeat 500+ times and OOM
Conflicts:
src/rt/rust_task.cpp
This sets the `get_tydesc()` return type correctly and removes the intrinsic module. See #3730, #3475.
Update: this now also removes the unused shape fields in tydescs.