It turns out that gyp (libuv's new build system) wants x64 for a 64-bit x86
architecture and ia32 for a 32-bit architecture, so this performs the relevant
mapping and then invokes libuv's configure script with the appropriate target
architecture.
This can be verified by running make with VERBOSE=1: on a 64-bit build, libuv
was previously passed "-arch i386", and it is now passed "-arch x86_64".
Closes #8826
The syntax of the script requires python < 3, and so does our build system, so
we can just use CFG_PYTHON to run the script. This prevents build failures
where `python` is actually python3 or later.
There were two main differences between the old libuv and the master version:
1. The uv_last_error function is now gone. The error code returned by each
function is the "last error", so now a UvError is just a wrapper around a
c_int (see the sketch after this list).
2. The repo no longer includes a makefile, and the build system has changed.
According to the build directions on joyent/libuv, this now downloads a `gyp`
program into the `libuv/build` directory and builds using that. This
shouldn't add any dependencies on autotools or anything like that.
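To illustrate the first point, here is a minimal C++ sketch of the new
error-handling style (the signatures shown are from present-day libuv and may
not match the exact snapshot vendored here): each call returns a negative
error code directly, and that c_int is what UvError now wraps.

```cpp
#include <uv.h>
#include <cstdio>

int main() {
    uv_loop_t loop;
    int rc = uv_loop_init(&loop);   // every call reports its own error
    if (rc < 0) {
        // No uv_last_error() any more: the return value *is* the error
        // code, and uv_strerror() maps it to a message.
        std::fprintf(stderr, "loop init failed: %s\n", uv_strerror(rc));
        return 1;
    }
    uv_run(&loop, UV_RUN_DEFAULT);
    uv_loop_close(&loop);
    return 0;
}
```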
Closes #8407
Closes #6567
Closes #6315
Apparently the standard --build and --host flags don't actually
_do_ anything. This re-uses the libuv flags, since they are the
same ones needed to get jemalloc to cross-compile.
This lets us use #ifdefs to determine which stage of the build we happen
to be in, which is useful in the event we need to make changes to rustrt
that are incompatible with the code generated by stage0.
This should help pave the way to completing #6575, which will likely
require changes to type signatures for spawn_fn & glue_fn in rustrt.
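A minimal sketch of the kind of change this enables in rustrt; the RUST_STAGE0
macro name and the function below are illustrative assumptions, not the actual
build flag or runtime API:

```cpp
// Sketch only: assume the build passes -DRUST_STAGE0 when compiling
// the runtime that will link against stage0-generated code.
#ifdef RUST_STAGE0
// Signature that the snapshot (stage0) compiler's output expects.
extern "C" void upcall_example(int arg);
#else
// Changed signature, usable once stage1+ rustc emits matching calls.
extern "C" void upcall_example(int arg, void *context);
#endif
```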
It uses the private field of the TCB head to store the stack limit. I tested
this on my Raspberry Pi: a simple hello-world program ran without any problem,
but a more complex program segfaulted, as in #6231.
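For reference, a hedged sketch of the mechanism on ARM EABI: the thread
pointer (which addresses the TCB head) is read from TPIDRURO, and the stack
limit is stored into the head's private word. The slot index below is a
placeholder, not the real layout:

```cpp
// Assumption: the private field sits at some fixed word offset from
// the thread pointer; 0 is a stand-in, not the actual offset.
static const int PRIVATE_SLOT = 0;

static inline void **tcb_head() {
    void *tp;
    // Read TPIDRURO, the user-readable thread ID register on ARM.
    asm volatile("mrc p15, 0, %0, c13, c0, 3" : "=r"(tp));
    return (void **)tp;
}

static inline void record_stack_limit(void *limit) {
    tcb_head()[PRIVATE_SLOT] = limit;
}
```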
- thanks to work in libuv's upstream, we can call libuv's Makefile directly
with parameters, instead of descending into gyp-uv madness and generating
our own.
Safe points are exported in a per-module list via the crate map. At
startup, a call into the C runtime walks the crate map and aggregates
the list of safe points for the program.
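A hedged sketch of that startup walk; the struct layouts and names below are
assumptions standing in for the real crate-map definitions:

```cpp
#include <cstddef>
#include <vector>

// Assumed shape of the per-module safe point list exported via the
// crate map; the real field names and layout will differ.
struct safe_point_list {
    std::size_t n_points;
    void **points;                  // code addresses of the safe points
};

struct mod_entry {
    const char *name;
    safe_point_list *safe_points;
};

// Walk every module entry once at startup and aggregate the safe
// points into a single program-wide table for the GC to consult.
static void aggregate_safe_points(mod_entry *entries, std::size_t n,
                                  std::vector<void *> &out) {
    for (std::size_t i = 0; i < n; i++) {
        safe_point_list *sp = entries[i].safe_points;
        if (!sp) continue;
        for (std::size_t j = 0; j < sp->n_points; j++)
            out.push_back(sp->points[j]);
    }
}
```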
Currently the GC doesn't actually deallocate memory on malloc and
free. Adding the GC at this stage is primarily of testing value.
The GC does attempt to clean up the exchange heap and stack-allocated
resources on failure.
A result of this patch is that the user now needs to be careful about
what code they write in destructors, because the GC and/or failure
cleanup may need to call destructors. Specifically, calls to malloc
are considered unsafe and may result in infinite loops or segfaults.
This just moves the responsibility for joining with scheduler threads
off to a worker thread. This will be needed when we allow tasks to be
scheduled on the main thread.
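A hedged pthreads sketch of the idea; the names are illustrative:

```cpp
#include <cstddef>
#include <pthread.h>
#include <vector>

// Instead of the main thread blocking in pthread_join on every
// scheduler thread, one dedicated worker does all the joining,
// leaving the main thread free to run tasks itself.
static void *join_worker(void *arg) {
    std::vector<pthread_t> *scheds =
        static_cast<std::vector<pthread_t> *>(arg);
    for (std::size_t i = 0; i < scheds->size(); i++)
        pthread_join((*scheds)[i], 0);
    return 0;
}
```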
rust_sched_launcher is actually responsible for setting up the thread and
starting the loop. There will be other implementations that do not actually
set up a new thread, in order to support scheduling tasks on the main OS
thread.
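A hedged sketch of that split; the interface below is illustrative, not the
runtime's actual rust_sched_launcher declaration:

```cpp
#include <pthread.h>

// The launcher decides *where* the scheduler loop runs; the loop
// itself is just a callback handed to it.
struct sched_launcher {
    virtual ~sched_launcher() {}
    virtual void start(void (*loop_fn)(void *), void *sched) = 0;
};

// Default implementation: set up a fresh OS thread for the loop.
struct thread_launcher : sched_launcher {
    pthread_t thread;
    struct args { void (*fn)(void *); void *sched; } a;
    static void *trampoline(void *p) {
        args *a = static_cast<args *>(p);
        a->fn(a->sched);
        return 0;
    }
    void start(void (*fn)(void *), void *sched) {
        a.fn = fn;
        a.sched = sched;
        pthread_create(&thread, 0, trampoline, &a);
    }
};

// Alternative implementation: run the loop on the calling thread,
// which is what scheduling tasks on the main OS thread needs.
struct local_launcher : sched_launcher {
    void start(void (*fn)(void *), void *sched) { fn(sched); }
};
```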
I was still having issues with the build system somehow getting confused
as to which set of valgrind headers to use when compiling rt.
This commit moves all the valgrind headers into their own directory
under rt and makes the usage more consistent. The compiler is now passed
the -DNVALGRIND flag when valgrind is not installed, as opposed to
passing -DHAVE_VALGRIND.
We also pass -I src/rt to the compiler when building rt so you can more
easily #include what you want. I also cleaned up some erroneous #includes
along the way.
It should be safe to always just include the local valgrind headers and use
them without question: NVALGRIND turns the operations into no-ops when it is
defined, and the build and tests run cleanly with or without valgrind
installed.
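For example, runtime code can now always pull in the in-tree headers directly
(VALGRIND_MAKE_MEM_DEFINED is a real memcheck client request; the surrounding
code is just illustrative):

```cpp
#include <cstdlib>
#include "valgrind/memcheck.h"   // the in-tree copy under src/rt

int main() {
    char *buf = static_cast<char *>(std::malloc(64));
    // Under valgrind this marks the range as defined for memcheck;
    // with -DNVALGRIND the headers compile the request to a no-op.
    VALGRIND_MAKE_MEM_DEFINED(buf, 64);
    std::free(buf);
    return 0;
}
```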