I hereby declare that messages sent from the same source arrive in order (Issue #2605)
Removing FIXME, owned is the correct type here. (Issue #2704)
Remove outdated FIXME (Issue #2703)
Updating test for spawning native functions (Issue #2602)
Removing bogus FIXME (Issue #2599)
- we now have two interfaces for the TCP/IP server/listener workflow,
based on different user approaches to handling the flow of accepting a
new tcp connection:
1. the "original" API closely mimics the low-level libuv API, in that we
have an on_connect_cb that the user provides *that is ran on the libuv
thread*. In this callback, the user can accept() a connection, turning it
into a tcp_socket.. of course, before accepting, they have the option
of passing it to a new task, provided they *make the cb block until
the accept is done* .. this is because, in libuv, you have to do the
uv_accept call in the span of that on_connect_cb callback that gets fired
when a new connection comes in. thems the breaks..
I wanted to just get rid of this API, because the general proposition of
users always running code on the libuv thread sounds like an invitation
for many future headaches. the API restriction of having to decide
immediately whether to accept a connection (and allowing the user to block
libuv as needed) isn't too bad for power users who can conceive of
circumstances where they would drop an incoming TCP connection and who, in
general, know what they're doing.
but as a general API, I thought this was a bit cumbersome, so I ended up
devising..
2. an API that is initiated with a call to `net::tcp::new_listener()` ..
it has a similar signature to `net::tcp::listen()`, except that it just
returns an object that behaves somewhat like a `comm::port`. Users can
block on the `tcp_conn_port` to receive new connections, either in the
current task or in a new task, depending on which API route they take
(`net::tcp::conn_recv` or `net::tcp::conn_recv_spawn`, respectively).. there
is also a `net::tcp::conn_peek` function that will peek on the
underlying port to see whether there are pending connections.
The main difference with this API is that the low-level libuv glue is
going to *accept every connection attempt*, along with the overhead that
brings. But this is a much more hassle-free API for 95% of use cases and
is probably the one most users will want to reach for. A sketch of how it
reads at the call site is below.
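a quick sketch of route 2 at the call site, written in today's rust
syntax (this commit long predates it) with stubbed-out bodies.. only the
fn names come from the text above; the signatures, types, and error
handling are all assumptions:

```rust
mod tcp {
    pub struct TcpConnPort;          // behaves somewhat like a comm::port
    pub struct TcpSocket;

    pub fn new_listener(_ip: &str, _port: u16) -> Result<TcpConnPort, i32> {
        Ok(TcpConnPort)              // real version binds and starts listening
    }
    pub fn conn_recv(_p: &TcpConnPort) -> Result<TcpSocket, i32> {
        Ok(TcpSocket)                // real version blocks the current task
    }
    pub fn conn_peek(_p: &TcpConnPort) -> bool {
        false                        // any pending connections on the port?
    }
}

fn main() -> Result<(), i32> {
    // route 2: no user code runs on the libuv thread; just block on the port
    let conn_port = tcp::new_listener("127.0.0.1", 8888)?;
    if tcp::conn_peek(&conn_port) {
        let _sock = tcp::conn_recv(&conn_port)?; // accept already happened below us
    }
    Ok(())
}
```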
I need these in the context of doing various malloc/free operations for
libuv structs that need to live on the heap, because of the API workflow
(there's no stack to put them in). This has cropped up several times
when impl'ing the high-level API for things like timers, but I've decided
to take the plunge and use this approach for the net::tcp module.
Technically, this can be avoided by spawning a new
task that contains the needed memory structures on its stack and then
having it block for the duration of the time we need that memory to be
valid (this is what I did in std::timer). Exposing this API provides a
much lower-overhead way to address
the issue, albeit with safety concerns. The main mitigation policy should
be to use malloc/free with libuv handles only when the handles are then
associated with a resource or class-with-dtor. That way we have a finite
lifetime for the object and can guarantee a free(), barring a runtime
crash (in which case you have bigger problems!)
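a minimal sketch of that mitigation policy, in today's rust terms: Drop
stands in for the era's resource/class-with-dtor, std::alloc stands in
for the raw malloc/free helpers, and the type name and handle size are
made up:

```rust
use std::alloc::{alloc, dealloc, Layout};

// tie the heap allocation for a libuv handle to an owner whose dtor
// frees it, so the memory has a finite, guaranteed lifetime
struct UvHandleMem {
    ptr: *mut u8,
    layout: Layout,
}

impl UvHandleMem {
    fn new(size: usize) -> UvHandleMem {
        let layout = Layout::from_size_align(size, 8).unwrap();
        UvHandleMem { ptr: unsafe { alloc(layout) }, layout }
    }
}

impl Drop for UvHandleMem {
    fn drop(&mut self) {
        unsafe { dealloc(self.ptr, self.layout) } // the guaranteed free()
    }
}

fn main() {
    let handle = UvHandleMem::new(96); // placeholder size for a uv_* struct
    // .. hand handle.ptr to libuv for as long as `handle` is alive ..
    assert!(!handle.ptr.is_null());
} // dtor runs here, freeing the allocation exactly once
```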
.. seeing occasional valgrind spew about some invalid reads/writes..
need to investigate further.. I think it's related to my poor-citizen
conduct, re: pointers stashed in rust_kernel..
The runtime is in an uncertain state here and, instead of thinking
about how to make the logger work correctly, let's just avoid it.
Currently, it ends up hitting an assert saying that we can't log on
the rust stack.
.. up next: windows!
.. impl'd uv::direct::read_stop() and uv::direct::close() to wrap things up
.. demonstrated sending data out of the uv_read_cb via a channel (which
we block on to recv all of it, complete w/ EOF notification) that is
read from after the loop exits.
.. helpers to read the guts of a uv_buf_t
.. an idea I'm kicking around: all of these hideous data-accessor
functions are starting to pile up in uv::direct .. I might make impl/iface
pairs for the various uv_* types that I'm using, in order to encapsulate
those data-access functions and, perhaps, make the access look a little
cleaner (it still won't be straight field access, but it'll be a lot
better; see the sketch after this list)
.. formatting cleanup to satisfy make check
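here's roughly what an impl/iface pair for uv_buf_t could look like, in
today's trait/impl syntax; the accessor names and the raw helpers they
wrap are hypothetical:

```rust
// a uv_buf_t-shaped struct; the real layout varies by platform
#[allow(non_camel_case_types)]
#[repr(C)]
struct uv_buf_t {
    base: *mut u8,
    len: usize,
}

// hypothetical free functions of the kind piling up in uv::direct
unsafe fn get_base_from_buf(buf: *const uv_buf_t) -> *mut u8 { (*buf).base }
unsafe fn get_len_from_buf(buf: *const uv_buf_t) -> usize { (*buf).len }

// the encapsulation idea: pair the type with a trait so call sites
// read buf.base() / buf.len() instead of calling free functions
trait UvBuf {
    fn base(&self) -> *mut u8;
    fn len(&self) -> usize;
}

impl UvBuf for uv_buf_t {
    fn base(&self) -> *mut u8 { unsafe { get_base_from_buf(self) } }
    fn len(&self) -> usize { unsafe { get_len_from_buf(self) } }
}
```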
so we're now adhering to the libuv C API and passing structs by-val where
it is expected, instead of pulling pointer trickery (or, worse, having to
malloc structs in C++ to be passed back to rust and then into C again).
we have to use the ++ sigil in rust-side extern fn decls in order to have
rust actually copy the struct, by value, onto the C stack. gotcha, indeed.
also adding a helper method to verify/remind how to pass a struct by-val
into C... check out the rust fn sig for rust_uv_ip4_test_verify_port_val()
for more info. a sketch of the pattern follows.
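in today's syntax the ++ sigil is gone, and a plain by-value #[repr(C)]
struct does the same job.. only the rust_uv_ip4_test_verify_port_val name
comes from this commit; the fields and the port parameter type are
assumptions:

```rust
#[allow(non_camel_case_types)]
#[repr(C)]
struct sockaddr_in {
    // placeholder fields; the real sockaddr_in layout is platform-specific
    sin_family: u16,
    sin_port: u16,
    sin_addr: u32,
}

extern "C" {
    // declared by-value, so the struct is copied onto the C stack;
    // this is the effect the ++ mode sigil forced in the era's Rust
    fn rust_uv_ip4_test_verify_port_val(addr: sockaddr_in,
                                        expected_port: u32) -> bool;
}
```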
.. but passing sockaddr_in by val back to C is broken; we're still passing
by ptr
.. the uv_write_cb is processed, but we have a status of -1.. there is
also valgrind spew.. so buf passing is still broken.
lots of changes here.. should've committed sooner.
- added a uv::direct module that contains rust fns that map, as neatly as
possible, to the libuv C library. they operate on ptrs to libuv
structs mapped in rust, as much as possible (there are some notable
exceptions). these uv::direct fns should only take inputs from rust and,
as necessary, translate them into C-friendly types before passing them to
the C functions. We want them to return ints, as the libuv functions do,
so we can start tracking status.
- the notable exceptions for structs mentioned above are due to gh-1402,
which prevents us from passing structs, by value, across the Rust<->C
barrier (they turn to garbage, pretty much). So in the cases where we get
back by-val structs from C (uv_buf_init(), uv_ip4_addr(), uv_err_t in
callbacks), we're going to use *ctypes::void (or just errnum ints for
uv_err_t) until gh-1402 is resolved.
- crust functions used in these uv::direct fns as callbacks from libuv
will eschew uv_err_t in favor of a plain int, if at all possible (it
probably isn't.. hm.. I know libuv wants to eventually replace uv_err_t
with an int as well.. so, hm).
- started fleshing out a big, gnarly test case to exercise the tcp request
side of the uv::direct functions. I'm at the point where, after the
connection is established, we write to the stream... when the writing is
done, we will read from it, then tear the whole thing down.
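the shape these uv::direct fns take, sketched with a hypothetical shim
name (only the pattern itself, rust inputs in and libuv's int status out,
comes from this commit):

```rust
use std::ffi::{c_int, c_void};

extern "C" {
    // hypothetical C shim of the kind the uv::direct fns call into
    fn rust_uv_tcp_init(loop_ptr: *mut c_void, handle: *mut c_void) -> c_int;
}

pub mod direct {
    use super::*;

    // take rust-side inputs, translate them into C-friendly types,
    // pass through, and hand back libuv's int status
    pub unsafe fn tcp_init(loop_ptr: *mut c_void, handle: *mut c_void) -> i32 {
        rust_uv_tcp_init(loop_ptr, handle) as i32
    }
}
```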
overall, it turns out that doing "close to the metal" interaction with
C libraries is painful (and more chatty) when orchestrated from rust. My
understanding is that not much at all is written in this fashion in the
existing core/std codebase.. malloc'ing in C has been preferred, from what
I've gathered. So we're treading new ground here!
It's possible to have negative times when expressing time before 1970, so
we should use signed types. Other platforms can return times at a higher
resolution, so we should use 64 bits.
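in sketch form (the field names here are just illustrative):

```rust
// signed so pre-1970 times are representable; 64-bit seconds so
// high-resolution and far-off values from other platforms still fit
struct Timespec {
    sec: i64,
    nsec: i32,
}
```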
This is a workaround for #1815. libev uses realloc(0) to
free the loop, which valgrind doesn't like, so we have suppressions
to make valgrind ignore it.
Valgrind also has a sanity check, when collecting allocation backtraces,
that the stack pointer must be at least 512 bytes into the stack (at
least 512 bytes of frames must have come before). When this is not
the case, it doesn't collect the backtrace.
Unfortunately, with our spaghetti stacks that valgrind check sometimes
triggers and we don't get the backtrace for the realloc(0); it
fails to be suppressed and gets reported as 0 bytes lost
from a malloc with no backtrace.
This fixes the issue by alloca'ing 512 bytes before calling uv_loop_delete.
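the trick, approximated in today's rust (the real fix lives in the C
runtime and literally calls alloca; black_box is my stand-in to keep the
pad on the stack):

```rust
use std::ffi::c_void;
use std::hint::black_box;

extern "C" {
    fn uv_loop_delete(uv_loop: *mut c_void); // libuv API of the era
}

unsafe fn loop_delete_with_pad(uv_loop: *mut c_void) {
    // put >= 512 bytes of frame on the stack first, so valgrind's
    // backtrace sanity check passes and the realloc(0) suppression
    // can actually match
    let pad = black_box([0u8; 512]);
    uv_loop_delete(uv_loop);
    black_box(&pad); // keep the pad alive across the call
}
```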
Many changes to code structure are included:
- removed TIME_SLICE_IN_MS
- removed synchronized_indexed_list
- removed region_owned
- kernel_owned moved to kernel.h, task_owned moved to task.h
- global configs moved to rust_globals.h
- changed #pragma once to standard guard in rust_upcall.h
- got rid of memory.h
Previously two methods existed: rust_sched_loop::get_task and rust_task::get_task_from_tcb. Merge them into one, trying the faster one (tcb) first and, if that fails, falling back to the slower one from the tls.
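a sketch of the merged lookup (the real code is C++ in the runtime; the
stubs below just stand in for the two existing paths):

```rust
struct Task;

fn get_task_from_tcb() -> *mut Task { std::ptr::null_mut() } // fast path stub
fn get_task_from_tls() -> *mut Task { std::ptr::null_mut() } // slow path stub

fn get_task() -> *mut Task {
    let t = get_task_from_tcb();         // try the tcb first
    if !t.is_null() { t } else { get_task_from_tls() } // fall back to tls
}
```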
Adds back the ability to make assertions about locks, but only under the
--enable-debug configuration
This reverts commit b247de6458.
Conflicts:
src/rt/rust_sched_loop.cpp
This just moves the responsibility for joining with scheduler threads
off to a worker thread. This will be needed when we allow tasks to be
scheduled on the main thread.
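in sketch form (present-day rust, names illustrative):

```rust
use std::thread::{self, JoinHandle};

// hand the joins off to a worker so the calling (eventually: main)
// thread stays free to run scheduled tasks
fn join_on_worker(scheds: Vec<JoinHandle<()>>) -> JoinHandle<()> {
    thread::spawn(move || {
        for s in scheds {
            s.join().unwrap(); // block the worker, not the main thread
        }
    })
}
```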
rust_sched_launcher is actually responsible for setting up the thread and
starting the loop. There will be other implementations that do not actually
set up a new thread, in order to support scheduling tasks on the main OS
thread.
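the split described above, sketched in today's rust; the trait and type
names are made up to mirror the description:

```rust
trait SchedLauncher {
    fn launch(self: Box<Self>, run_loop: Box<dyn FnOnce() + Send>);
}

// today's behavior: set up a fresh thread and start the loop there
struct ThreadLauncher;
impl SchedLauncher for ThreadLauncher {
    fn launch(self: Box<Self>, run_loop: Box<dyn FnOnce() + Send>) {
        std::thread::spawn(move || run_loop());
    }
}

// the planned alternative: no new thread, the loop runs right here,
// which is what scheduling tasks on the main OS thread needs
struct MainThreadLauncher;
impl SchedLauncher for MainThreadLauncher {
    fn launch(self: Box<Self>, run_loop: Box<dyn FnOnce() + Send>) {
        run_loop();
    }
}
```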
Remove the random context from rust_scheduler and use a simple round-robin system to choose which thread a new task gets put on. Also, some incorrect tab indents around scoped blocks were fixed.
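the round-robin choice in sketch form (names hypothetical):

```rust
struct ThreadHandle; // stand-in for a scheduler thread

struct Scheduler {
    threads: Vec<ThreadHandle>,
    next: usize, // replaces the old random context
}

impl Scheduler {
    // cycle through the threads in order when placing a new task
    fn pick_thread(&mut self) -> &ThreadHandle {
        let idx = self.next;
        self.next = (self.next + 1) % self.threads.len();
        &self.threads[idx]
    }
}
```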