Refactor away resolve_name_in_module in resolve_imports.rs
Rewrite and improve the core name resolution procedure in NameResolution::result and Module::resolve_name
Refactor the duplicate checking code into NameResolution::try_define
Both of these targets have jemalloc disabled unconditionally right now, so using
`maybe_jemalloc` here isn't right. This fixes the case where a Linux compiler
(which is itself configured to use jemalloc) attempts to cross-compile to MinGW,
causing it to try to find an `alloc_jemalloc` crate (and failing).
Also update the instability reason to include a note about a possible
bad interaction with condition variables on systems that allow
waiting on a RwLock guard.
Here's another go at adding emscripten support. This needs to wait again on new [libc definitions](https://github.com/rust-lang-nursery/libc/pull/122) landing. To get the libc definitions right I had to add support for i686-unknown-linux-musl, whose definitions are very close to emscripten's, which are in turn derived from arm/musl.
This branch additionally removes the makefile dependency on the `EMSCRIPTEN` environment variable by not building the unused compiler-rt.
Again, this is not sufficient for actually compiling to asmjs since it needs additional LLVM patches.
r? @alexcrichton
This tells trans::back::write not to run LLVM codegen to create .o
files but to put LLVM bitcode in .o files.
Emscripten's emcc accepts .o files in this format, and this is,
I think, slightly easier than making rlibs work without .o
files.
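For illustration (not part of the patch): LLVM bitcode begins with the magic bytes `BC\xC0\xDE`, so a quick way to confirm that a produced `.o` actually holds bitcode rather than a native object is to check its first four bytes. The `foo.o` path below is just a placeholder:

```rust
use std::fs::File;
use std::io::Read;

// LLVM bitcode files begin with the magic bytes "BC\xC0\xDE".
const BITCODE_MAGIC: [u8; 4] = [0x42, 0x43, 0xC0, 0xDE];

fn is_llvm_bitcode(path: &str) -> std::io::Result<bool> {
    let mut magic = [0u8; 4];
    File::open(path)?.read_exact(&mut magic)?;
    Ok(magic == BITCODE_MAGIC)
}

fn main() -> std::io::Result<()> {
    // Placeholder path for an artifact produced for the asmjs target.
    println!("contains LLVM bitcode: {}", is_llvm_bitcode("foo.o")?);
    Ok(())
}
```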
Backtraces, and the compilation of libbacktrace for asmjs, are disabled.
This port doesn't use jemalloc so, like pnacl, it disables jemalloc *for all targets*
in the configure file.
It disables stack protection.
It could return in the future if it returned a different guard type that
could not be used with Condvar; otherwise it is unsafe, as another thread
can invalidate an "inner" reference during a Condvar::wait.
cc #27746
These commits finish up closing out https://github.com/rust-lang/rust/issues/24237 by updating all locations where we create new file descriptors to use variants that atomically create the file descriptor and set CLOEXEC where possible. Previous support for doing this in `File::open` was added in #27971 and support for `try_clone` was added in #27980. These commits fill out:
* `Socket::new` now passes `SOCK_CLOEXEC`
* `Socket::accept` now uses `accept4`
* `pipe2` is used instead of `pipe`
Unfortunately most of this support is Linux-specific, and most of it is post-2.6.18 (our oldest supported version), so all of the detection here is done dynamically. It looks like OSX does not have equivalent variants for these functions, so there's nothing more we can do there. Support for BSDs can be added over time if they also have these functions.
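For reference, whether a given descriptor actually ended up with `CLOEXEC` can be checked with `fcntl(F_GETFD)`; a standalone Unix sketch (not part of the patch):

```rust
// Standalone sketch: inspect whether CLOEXEC is set on a descriptor.
extern crate libc;

use std::os::unix::io::AsRawFd;

fn is_cloexec(fd: libc::c_int) -> bool {
    let flags = unsafe { libc::fcntl(fd, libc::F_GETFD) };
    flags != -1 && (flags & libc::FD_CLOEXEC) != 0
}

fn main() {
    let file = std::fs::File::open("/dev/null").unwrap();
    println!("CLOEXEC set: {}", is_cloexec(file.as_raw_fd()));
}
```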
Closes #24237
Issue #31109 uncovered two semi-related problems:
* A panic in `str::parse::<f64>`
* A panic in `rustc::middle::const_eval::lit_to_const` where the result of float parsing was unwrapped.
This series of commits fixes both issues, and also includes some drive-by fixes for things I noticed while tracking down the parsing panic.
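The const_eval side of the fix boils down to propagating the parse failure instead of unwrapping it; in spirit (a simplified sketch, not the actual rustc code, with a made-up error type):

```rust
// Simplified sketch: propagate the float-parse failure instead of unwrapping it.
fn float_lit_to_const(lit: &str) -> Result<f64, String> {
    lit.parse::<f64>()
        .map_err(|e| format!("invalid float literal `{}`: {}", lit, e))
}

fn main() {
    assert!(float_lit_to_const("1.25e2").is_ok());
    assert!(float_lit_to_const("not-a-float").is_err());
}
```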
Abort on stack overflow instead of re-raising SIGSEGV
We use guard pages that cause the process to abort to protect against
undefined behavior in the event of stack overflow. We have a handler
that catches segfaults, prints out an error message if the segfault was
due to a stack overflow, then unregisters itself and returns to allow
the signal to be re-raised and kill the process.
This caused some confusion, as it was unexpected that safe code would be
able to cause a segfault, while it's easy to overflow the stack in safe
code. To avoid this confusion, when we detect a segfault in the guard
page, abort instead of the previous behavior of re-raising SIGSEGV.
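Roughly, the handler now looks like this (a heavily simplified, Linux-flavored sketch with hypothetical guard-page bounds, not the actual runtime code):

```rust
extern crate libc;

use std::mem;
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical guard-page bounds; the real runtime records them per thread.
static GUARD_START: AtomicUsize = AtomicUsize::new(0);
static GUARD_END: AtomicUsize = AtomicUsize::new(0);

extern "C" fn segv_handler(
    sig: libc::c_int,
    info: *mut libc::siginfo_t,
    _ctx: *mut libc::c_void,
) {
    unsafe {
        let addr = (*info).si_addr() as usize;
        if addr >= GUARD_START.load(Ordering::Relaxed)
            && addr < GUARD_END.load(Ordering::Relaxed)
        {
            // write(2) is async-signal-safe, unlike println!.
            let msg = b"\nthread has overflowed its stack\n";
            libc::write(libc::STDERR_FILENO, msg.as_ptr() as *const _, msg.len());
            // Abort directly instead of re-raising SIGSEGV.
            libc::abort();
        }
        // Not a guard-page fault: restore the default disposition so the
        // signal kills the process normally when re-raised.
        libc::signal(sig, libc::SIG_DFL);
    }
}

fn install_handler() {
    unsafe {
        // The real code also registers a sigaltstack so the handler has room
        // to run while the main stack is exhausted.
        let mut action: libc::sigaction = mem::zeroed();
        action.sa_flags = libc::SA_SIGINFO | libc::SA_ONSTACK;
        action.sa_sigaction = segv_handler as libc::sighandler_t;
        libc::sigaction(libc::SIGSEGV, &action, std::ptr::null_mut());
        libc::sigaction(libc::SIGBUS, &action, std::ptr::null_mut());
    }
}

fn main() {
    install_handler();
}
```

The key difference from the old behavior is the `libc::abort()` call where the handler previously unregistered itself and returned so the signal could be re-raised.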
To test this, we need to adapt the tests for segfault to actually check
the exit status. Doing so revealed that the existing test for segfault
behavior was actually invalid; LLVM optimizes the explicit null pointer
dereference down to an illegal instruction, so the program aborts with
SIGILL instead of SIGSEGV and the test didn't actually trigger the
signal handler at all. Use a C helper function to get a null pointer
that LLVM can't optimize away, so we get our segfault instead.
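The C helper just returns a pointer whose value the optimizer can't see through. As a rough illustration of the same idea in today's Rust (not the actual test, which predates `std::hint::black_box`):

```rust
use std::hint::black_box;

fn main() {
    // black_box hides the pointer's value from the optimizer, so the store
    // below remains a real memory access rather than being folded into an
    // illegal-instruction trap. Dereferencing null is deliberate here: the
    // whole point of the test is to fault and exercise the signal handler.
    let p: *mut u8 = black_box(std::ptr::null_mut());
    unsafe { *p = 42 };
}
```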
This is a [breaking-change] if anyone is relying on the exact signal
raised to kill a process on stack overflow.
Closes #31273
The scope of these refactorings is a little bit bigger than the title implies. See each commit for details.
I’m submitting this for nitpicking now (the first 4 commits), because I feel the basic idea/implementation is sound and should work. I will eventually expand this PR to cover the translator changes necessary for all this to work (+ tests), ~~and perhaps implement a dynamic dropping scheme while I’m at it as well.~~
r? @nikomatsakis
This commit attempts to use the `pipe2` syscall on Linux to atomically set the
CLOEXEC flag for pipes created. Unfortunately this was added in 2.6.27 so we
have to dynamically determine whether we can use it or not.
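The shape of this is roughly the following (a simplified sketch; the real code caches the lookup): resolve `pipe2` at runtime with `dlsym` and fall back to `pipe` plus `fcntl(FD_CLOEXEC)` when it isn't available:

```rust
extern crate libc;

use std::mem;

// Runtime lookup of pipe2(2); returns None on pre-2.6.27 systems.
type Pipe2Fn = unsafe extern "C" fn(*mut libc::c_int, libc::c_int) -> libc::c_int;

fn lookup_pipe2() -> Option<Pipe2Fn> {
    unsafe {
        let sym = libc::dlsym(libc::RTLD_DEFAULT, b"pipe2\0".as_ptr() as *const _);
        if sym.is_null() { None } else { Some(mem::transmute(sym)) }
    }
}

fn cloexec_pipe() -> std::io::Result<[libc::c_int; 2]> {
    let mut fds = [0 as libc::c_int; 2];
    unsafe {
        if let Some(pipe2) = lookup_pipe2() {
            if pipe2(fds.as_mut_ptr(), libc::O_CLOEXEC) == 0 {
                return Ok(fds);
            }
        }
        // Fallback: plain pipe() followed by fcntl(FD_CLOEXEC). Note that this
        // is not atomic with respect to a concurrent fork+exec.
        if libc::pipe(fds.as_mut_ptr()) != 0 {
            return Err(std::io::Error::last_os_error());
        }
        for &fd in &fds {
            libc::fcntl(fd, libc::F_SETFD, libc::FD_CLOEXEC);
        }
    }
    Ok(fds)
}

fn main() {
    println!("pipe fds: {:?}", cloexec_pipe().unwrap());
}
```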
This commit also updates the `fds-are-cloexec.rs` test to test stdio handles for
spawned processes as well.
This is necessary to atomically accept a socket and set the CLOEXEC flag at the
same time. Support only appeared in Linux 2.6.28 so we have to dynamically
determine which syscall we're supposed to call in this case.
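In sketch form (simplified, and calling `accept4` through libc directly rather than resolving it dynamically as the real code does):

```rust
extern crate libc;

// Accept a connection with CLOEXEC set atomically when accept4 is available,
// falling back to accept + fcntl on old kernels (ENOSYS).
fn accept_cloexec(listener: libc::c_int) -> std::io::Result<libc::c_int> {
    unsafe {
        let fd = libc::accept4(
            listener,
            std::ptr::null_mut(),
            std::ptr::null_mut(),
            libc::SOCK_CLOEXEC,
        );
        if fd >= 0 {
            return Ok(fd);
        }
        let err = std::io::Error::last_os_error();
        if err.raw_os_error() != Some(libc::ENOSYS) {
            return Err(err);
        }
        // Pre-2.6.28 kernel: non-atomic fallback.
        let fd = libc::accept(listener, std::ptr::null_mut(), std::ptr::null_mut());
        if fd < 0 {
            return Err(std::io::Error::last_os_error());
        }
        libc::fcntl(fd, libc::F_SETFD, libc::FD_CLOEXEC);
        Ok(fd)
    }
}

fn main() -> std::io::Result<()> {
    use std::os::unix::io::AsRawFd;
    let listener = std::net::TcpListener::bind("127.0.0.1:0")?;
    let _client = std::net::TcpStream::connect(listener.local_addr()?)?;
    println!("accepted fd: {}", accept_cloexec(listener.as_raw_fd())?);
    Ok(())
}
```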
Right now we only attempt to call one symbol which may not exist everywhere,
__pthread_get_minstack, but this pattern will come up more often as we start to
bind newer functionality of systems like Linux.
Take a similar strategy as the Windows implementation where we use `dlopen` to
lookup whether a symbol exists or not.
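The pattern looks roughly like this (a simplified sketch; the real code wraps the lookup in a macro and caches the result):

```rust
extern crate libc;

use std::ffi::CString;
use std::mem;

// Resolve an optional libc symbol at runtime; None means this libc doesn't
// provide it.
unsafe fn weak_symbol<F>(name: &str) -> Option<F> {
    assert_eq!(mem::size_of::<F>(), mem::size_of::<*mut libc::c_void>());
    let cname = CString::new(name).unwrap();
    let sym = libc::dlsym(libc::RTLD_DEFAULT, cname.as_ptr());
    if sym.is_null() {
        None
    } else {
        Some(mem::transmute_copy(&sym))
    }
}

fn main() {
    // glibc-internal function used when sizing thread stacks; absent elsewhere.
    type GetMinstack = unsafe extern "C" fn(*const libc::pthread_attr_t) -> libc::size_t;
    let f: Option<GetMinstack> = unsafe { weak_symbol("__pthread_get_minstack") };
    println!("__pthread_get_minstack available: {}", f.is_some());
}
```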
This commit adds support for creating sockets with the `SOCK_CLOEXEC` flag.
Support for this flag was added in Linux 2.6.27, however, and support does not
exist on platforms other than Linux. For this reason we still have the same
fallback as before but just special case Linux if we can.
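A sketch of the Linux special case (simplified; a newer userspace on a pre-2.6.27 kernel gets `EINVAL` back and takes the old path):

```rust
extern crate libc;

fn new_tcp_socket_cloexec() -> std::io::Result<libc::c_int> {
    unsafe {
        // Preferred path: the kernel sets CLOEXEC atomically at creation time.
        let fd = libc::socket(libc::AF_INET, libc::SOCK_STREAM | libc::SOCK_CLOEXEC, 0);
        if fd >= 0 {
            return Ok(fd);
        }
        // Pre-2.6.27 kernels reject unknown type flags with EINVAL; fall back
        // to a plain socket() followed by a (non-atomic) fcntl(FD_CLOEXEC).
        let err = std::io::Error::last_os_error();
        if err.raw_os_error() != Some(libc::EINVAL) {
            return Err(err);
        }
        let fd = libc::socket(libc::AF_INET, libc::SOCK_STREAM, 0);
        if fd < 0 {
            return Err(std::io::Error::last_os_error());
        }
        libc::fcntl(fd, libc::F_SETFD, libc::FD_CLOEXEC);
        Ok(fd)
    }
}

fn main() {
    println!("socket fd: {}", new_tcp_socket_cloexec().unwrap());
}
```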
Similar to the previous commit, if `F_DUPFD_CLOEXEC` succeeds then there's no
need for us to then call `set_cloexec` on platforms other than Linux. The
previously mentioned bug, where kernels don't actually set the `CLOEXEC` flag,
has only been reported on Linux, not elsewhere.
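For reference, the duplication itself looks roughly like this (a sketch with the error handling trimmed), with the defensive `FD_CLOEXEC` re-set kept only on Linux:

```rust
extern crate libc;

// Duplicate `fd` with CLOEXEC set atomically via F_DUPFD_CLOEXEC.
fn dup_cloexec(fd: libc::c_int) -> std::io::Result<libc::c_int> {
    unsafe {
        // The third argument is the minimum fd number for the duplicate.
        let new_fd = libc::fcntl(fd, libc::F_DUPFD_CLOEXEC, 0);
        if new_fd < 0 {
            return Err(std::io::Error::last_os_error());
        }
        // Linux only: some kernels reportedly returned success without
        // actually setting the flag, so set it again defensively.
        #[cfg(target_os = "linux")]
        libc::fcntl(new_fd, libc::F_SETFD, libc::FD_CLOEXEC);
        Ok(new_fd)
    }
}

fn main() {
    use std::os::unix::io::AsRawFd;
    let f = std::fs::File::open("/dev/null").unwrap();
    println!("dup'd fd: {}", dup_cloexec(f.as_raw_fd()).unwrap());
}
```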
On Linux we have to do this for binary compatibility with 2.6.18, but for other
OSes (e.g. OSX/BSDs/etc) they all support this flag so we don't need to pass it.
This change also modifies the dep graph infrastructure to keep track of the number of active tasks, so that even if we are not building the full dep-graph, we still get assertions when there is no active task and one does something that would add a read/write edge. This is particularly helpful since, if the assertions are *not* active, you wind up with the error happening in the message processing thread, which is too late to know the correct backtrace.
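A stripped-down illustration of the task-counting idea (hypothetical types, not the actual `DepGraph` API): keep a counter of open tasks and assert that it is non-zero whenever a read or write edge would be recorded, so misuse panics on the offending thread with a useful backtrace:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical stand-in for the dep-graph handle; only the counting is shown.
struct DepGraph {
    open_tasks: AtomicUsize,
}

impl DepGraph {
    fn new() -> DepGraph {
        DepGraph { open_tasks: AtomicUsize::new(0) }
    }

    fn with_task<R>(&self, f: impl FnOnce() -> R) -> R {
        self.open_tasks.fetch_add(1, Ordering::SeqCst);
        let result = f();
        self.open_tasks.fetch_sub(1, Ordering::SeqCst);
        result
    }

    fn read(&self, node: &str) {
        // Even when the full graph isn't being built, this assertion fires on
        // the thread that misbehaved, giving a meaningful backtrace.
        assert!(
            self.open_tasks.load(Ordering::SeqCst) > 0,
            "read of {} with no active task", node
        );
        // ... record the edge if the full dep-graph is enabled ...
    }
}

fn main() {
    let graph = DepGraph::new();
    graph.with_task(|| graph.read("Hir(foo)")); // fine
    // graph.read("Hir(foo)");                  // would panic: no active task
}
```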
~~Before landing, I need to do some performance measurements. Those are underway.~~
See measurements below. No real effect on time.
r? @michaelwoerister