Fixes #9882
Note that the actual checking code is inside an `if false` so that libstd still compiles properly.
libstd uses `asm!` in the runtime; if we put `#[feature(asm)]` in libstd, it fails to build at stage0 because the
`asm` feature is not yet known to the snapshot compiler.
We must wait until this PR makes it into a snapshot before the checking code can actually be activated.
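As a rough, self-contained sketch of the `if false` trick (the names below are illustrative, not the real compiler internals):

```rust
// The check is compiled, so it stays type-correct, but `if false` keeps it
// from running until the snapshot compiler understands #[feature(asm)].
fn gate_feature(feature: &str, msg: &str) {
    println!("error: {} requires `#[feature({})]`", msg, feature);
}

fn check_expr(expr_is_inline_asm: bool) {
    if false { // flip once this PR is in the snapshot
        if expr_is_inline_asm {
            gate_feature("asm", "inline assembly is not stable enough for use");
        }
    }
}

fn main() {
    check_expr(true); // currently a no-op: the gate is compiled but disabled
}
```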
This commit re-introduces the functionality of __morestack in a way that was
not originally anticipated. Rust does not currently have segmented stacks;
rather, it just has large stack segments. We do not currently detect when these
stack segments are overrun, but this commit leverages __morestack to check for exactly that.
This commit purges a lot of the old __morestack and stack-limit C++
functionality, migrating the necessary chunks to Rust. The stack limit is now
maintained entirely in Rust, and the "main logic bits" of __morestack are now
implemented in Rust as well.
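For illustration, a hedged sketch of what "maintaining the stack limit in Rust" can look like, written with today's `asm!` syntax; the TLS slot (%fs:0x70 on x86-64 Linux), the function name, and the exact signature are assumptions, not necessarily what this commit does:

```rust
#[cfg(all(target_arch = "x86_64", target_os = "linux"))]
unsafe fn record_sp_limit(limit: usize) {
    use std::arch::asm;
    // Store the limit in the thread-local slot that the split-stack function
    // prologue compares the stack pointer against.
    asm!("mov qword ptr fs:[0x70], {}", in(reg) limit, options(nostack, preserves_flags));
}

#[cfg(all(target_arch = "x86_64", target_os = "linux"))]
fn main() {
    // Writing 0 effectively disables the overflow check in this sketch.
    unsafe { record_sp_limit(0) }
}

#[cfg(not(all(target_arch = "x86_64", target_os = "linux")))]
fn main() {}
```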
I put my best effort into validating that this currently builds and runs successfully on OSX and Linux, 32- and 64-bit, but I was unable to get this working on Windows. We never did have unwinding through __morestack frames, and although I tried poking at it for a bit, I was unable to understand why we don't get unwinding right now.
A focus of this commit is to implement as much of the logic in Rust as possible. This involved some liberal usage of `no_split_stack` in various locations, along with some use of the `asm!` macro (scary). I modified a bit of C++ to stop calling `record_sp_limit`, because this is no longer defined in C++ but rather in Rust.
Another consequence of this commit is that `thread_local_storage::{get, set}` must both be flagged with `#[rust_stack]`. I've briefly looked at the implementations on OSX/Linux/Windows to ensure that their stack usage is small; I'm pretty sure they all use well under 20K of stack, so we probably don't have a lot to worry about.
Other things worthy of note:
* The default stack size is now 4MB instead of 2MB. This is so that when we request 2MB to call a C function, you don't immediately overflow just because you've already consumed some of your stack.
* `asm!` is actually pretty cool, maybe we could actually define context switching with it?
* I wanted to add links to the internet about all this jazz of storing information in TLS, but I was only able to find a link for the windows implementation. Otherwise my suggestion is just "disassemble on that arch and see what happens"
* I put my best effort forward on ARM/MIPS to tweak __morestack correctly; we have no ability to test this, so an extra set of eyes would be useful on these spots.
* This is all really tricky stuff, so I tried to put as many comments as I thought were necessary, but if anything is still unclear (or I completely forgot to take something into account), I'm willing to write more!
This commit resumes management of the stack boundaries and limits when switching
between tasks. It additionally leverages the __morestack function to run code
on "stack overflow". The current behavior is to abort the process, but this is
probably not the best behavior in the long term (for details, see the comment I
wrote up in the stack exhaustion routine).
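A minimal sketch of the shape of such an exhaustion routine, under the abort-the-process policy described above (the function name and everything else here are illustrative):

```rust
// Called (via __morestack) when a task blows past its stack limit.  Almost
// nothing is safe to do here because there is effectively no stack left, so
// this sketch just aborts; the real routine would also be flagged
// `no_split_stack` so the handler itself never re-enters __morestack.
extern "C" fn rust_stack_exhausted() -> ! {
    std::process::abort()
}

fn main() {
    // Never called directly in this sketch; in the runtime its address would
    // be wired up for __morestack to jump to.
    let _ = rust_stack_exhausted as extern "C" fn() -> !;
}
```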
Recently, the float type and the rust and rusti tools were removed from master.
This replaces float with f64 in code examples and removes mentions of float, the f suffix, and rust/rusti from the explanations
(plus some small things like rust -> Rust).
r? @alexcrichton
On most platforms, the time granularity is 1 second or more, so comparing
dates in tests that check whether rebuilding did or didn't happen leads
to spurious failures. Instead, test the same thing by making an output
file read-only and trapping attempts to write to it.
Instead of scrutinizing modification times in rustpkg tests,
change output files to be read-only and detect attempts to write
to them (hack suggested by Jack). This avoids time granularity problems.
As part of this change, I discovered that some dependencies weren't
getting written correctly (involving built executables and library
files), so this patch fixes that too.
This partly addresses #9441, but one test (test_rebuild_when_needed)
is still ignored on Linux.
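For reference, a minimal sketch of the read-only trick, using today's std::fs API purely for illustration (the helper name and path below are made up; the actual rustpkg tests use the 2013-era libraries):

```rust
use std::fs;
use std::path::Path;

/// Hypothetical helper: mark a build output read-only so that any attempt to
/// rewrite it during a test shows up as an error, instead of the test having
/// to compare coarse-grained modification times.
fn freeze_output(path: &Path) -> std::io::Result<()> {
    let mut perms = fs::metadata(path)?.permissions();
    perms.set_readonly(true); // a rebuild that tries to overwrite this now fails
    fs::set_permissions(path, perms)
}

fn main() -> std::io::Result<()> {
    freeze_output(Path::new("target/output.bin")) // path is illustrative
}
```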
Previously, the id of a method was the id of its 'self' argument, but this is not
the id which privacy was using (the id of the ast::method struct), so by moving
the ids over to the ones privacy targets, methods are now stripped correctly.
This stops labeling everything as "is private" when in fact the destination may
be public. Instead, the clause "is inaccessible" is used, and the private portion
of the path is called out with an "is private" message.
Closes #9793
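A hedged illustration of the distinction, with made-up names (the commented-out line is the one that would produce the diagnostic):

```rust
#[allow(dead_code)]
mod outer {
    mod inner {            // `inner` is private to `outer`...
        pub struct Widget; // ...but `Widget` itself is public
    }
}

fn main() {
    // Uncommenting the line below is an error: the destination (`Widget`) is
    // public, but the path is unusable from here because `inner` is private.
    // With this change the diagnostic says the item is inaccessible and
    // singles out `inner` as the part that "is private", instead of labeling
    // the whole path as private.
    // let _w = outer::inner::Widget;
}
```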
As discovered in #9925, it turns out that we weren't using jemalloc on most
platforms. Additionally, on some platforms we were using it incorrectly and
mismatching the libc version of malloc with the jemalloc version of malloc.
Furthermore, it's not clear that using jemalloc is indeed a large performance
win in particular situations. This could be due to building jemalloc
incorrectly, or possibly to using jemalloc incorrectly, but it is unclear at
this time.
Until jemalloc can be confirmed to integrate correctly on all platforms, and has
verifiably large performance wins there as well, it shouldn't be part of
the default build process. It should still be available for use via the
LD_PRELOAD trick on various architectures, but using it as the default allocator
for everything would require guaranteeing that it works in all situations,
which it currently doesn't.
Closes #9925
Previously an ExprLit was created *per byte*, causing huge memory bloat.
This adds a new `lit_binary` to contain a literal of binary data, which
is currently only used by the include_bin! syntax extension. This massively
speeds up compilation of the shootout-k-nucleotide-pipes test:
before:
time: 469s
memory: 6GB
assertion failure in LLVM (section too large)
after:
time: 2.50s
memory: 124MB
Closes #2598
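For context, a small sketch of the usage pattern that benefits; the file name is made up, and while this commit's macro is include_bin!, today's equivalent is spelled include_bytes!:

```rust
// With a dedicated binary-literal AST node, these bytes become a single
// literal rather than one expression per byte.
static DATA: &'static [u8] = include_bytes!("knucleotide-input.txt");

fn main() {
    println!("embedded {} bytes", DATA.len());
}
```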