- change port of tcp server test in uv_ll to avoid conflict w/ test in
net::tcp
- in a few places, the tcp::read fn is used in tests w/ a timeout.. suspend
use of the timeout from here on out.
* there are a few places where I was experimenting w/ `alt` where
`if`/`else` would've sufficed. don't drink the koolaid!
* I had an unneeded `else` structure (the `if` branch that preceded it
concluded with a `fail` statement.. I added the `fail` later in the dev
cycle for this branch and forgot to remove the `else` after doing so)
* made record decl and initialization consistent wrt `prop_name: value`
vs. `prop_name : value`
* changed an `alt` exp on an `ip_addr` to actually be exhaustive,
instead of using a catch-all clause (see the sketch below)
.. this test fails frequently, locally, when run with the batch of other
global_loop tests due to how valgrind deals with multithreading in the
test app. not sure what to do, here.
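roughly, the shape of that exhaustiveness fix (era syntax; the variant
payloads are assumptions -- ipv4 as four u8s, ipv6 as the 8x u16 tuple
mentioned further down):

    alt ip {
      ipv4(_, _, _, _) { /* the handling that was already there */ }
      ipv6(_, _, _, _, _, _, _, _) { fail "ipv6 not handled yet" }
      // no catch-all `_` arm: adding a new variant now breaks the build
      // here instead of silently falling through
    }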
- we now have two interfaces for the TCP/IP server/listener workflow,
based on two different approaches to how users deal with accepting
new tcp connections:
1. the "original" API closely mimics the low-level libuv API, in that we
have an on_connect_cb that the user provides *that is ran on the libuv
thread*. In this callback, the user can accept() a connection, turning it
into a tcp_socket.. of course, before accepting, they have the option
of passing it to a new task, provided they *make the cb block until
the accept is done* .. this is because, in libuv, you have to do the
uv_accept call in the span of that on_connect_cb callback that gets fired
when a new connection comes in. thems the breaks..
I wanted to just get rid of this API, because the general proposition of
users always running code on the libuv thread sounds like an invitation
for many future headaches. the restriction of having to decide,
immediately, whether to accept a connection (while blocking libuv as
needed) isn't too bad for power users who can conceive of circumstances
where they'd drop an incoming TCP connection and who generally know what
they're doing.
but as a general API, I thought this was a bit cumbersome, so I ended up
devising..
2. an API that is initiated with a call to `net::tcp::new_listener()` ..
it has a similar signature to `net::tcp::listen()`, except that it just
returns an object that sort of behaves like a `comm::port`. Users can
block on the `tcp_conn_port` to receive new connections, either in the
current task or in a new task, depending on which API route they take
(`net::tcp::conn_recv` or `net::tcp::conn_recv_spawn`, respectively)..
there is also a `net::tcp::conn_peek` function that will peek at the
underlying port to see if there are pending connections.
The main difference with this API is that the low-level libuv glue is
going to *accept every connection attempt*, along with the overhead that
brings. but this is a much more hassle-free API for 95% of use cases and
will probably be the one most users will want to reach for. the intended
usage shape is sketched below.
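hypothetical usage sketch (the param list and error handling here are
guesses, not the actual signatures):

    let conn_port = net::tcp::new_listener(ip, port, backlog, hl_loop);
    // block the current task until the libuv glue has accept()'d one..
    let sock = net::tcp::conn_recv(conn_port);
    // ..or hand each new connection off to a fresh task
    net::tcp::conn_recv_spawn(conn_port) {|new_sock|
        // runs in its own task w/ an already-accepted tcp_socket
    };
    // non-blocking check for pending connections on the underlying port
    if net::tcp::conn_peek(conn_port) { /* something is waiting */ }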
.. turns out that, without the export, the modules aren't accessible
outside of the crate itself. I thought that, by importing some module
into another (nesting it) and exporting from that nested module (which
is, itself, exported from std.rc), my mod would end up in the build
artifact. This doesn't appear to be the case. learning is fun!
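for the record, my (possibly imperfect) reconstruction of the gotcha, w/
invented names:

    // inside some mod that std.rc does export..
    import my_nested_mod;
    export my_nested_mod;
    // ..is NOT enough to land my_nested_mod in the crate artifact; the
    // mod has to be reachable through the crate file's own mod tree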
also whitespace cleanup
.. for now, the test just spins up the server and listens for messages,
echoing them back to an output port. there's a "kill" msg that it will
listen for. need to point the tcp client and server test impls at each
other for a loopback server/client test, like how it's done in uv::ll
once ipv6 parse/format lands, I can add another test using the entirely
same codebase, but substituting an ip_addr ipv6 variant for the ipv4
variant used in the existing code
still need some other plumbing to get the client/server tests to work
together.
still need implementation for parsing/output formatting and (perhaps?)
representation (for now, I just followed the ipv4 variant's lead and
am representing it as a tuple of 8x u16).
parsing an ipv6 addr is way more complex than parsing an ipv4 addr, so
I'm putting off an implementation here, for now.
candidate solutions:
- could use getaddrinfo() (exists on both POSIX and windows), but with
incompatible fn signatures.
- libuv has a way to parse an ipv6 string into a sockaddr_in6, but it
also requires a port, so it's probably not appropriate for ip_addr
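the representation being followed for now, sketched (era syntax; the ipv4
payload shape is an assumption):

    enum ip_addr {
        ipv4(u8, u8, u8, u8),
        // one u16 per group in the address, per the note above
        ipv6(u16, u16, u16, u16, u16, u16, u16, u16)
    }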
they're changed into a net::tcp::tcp_err_data record, for now. once the
scope of possible tcp errors from libuv is established, I'll create an
err type for each one and return those where they might occur
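presumably the record looks like the uv_err_data shape noted below (a
sketch, not the actual decl):

    type tcp_err_data = { err_name: str, err_msg: str };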
* tweaked the layout of the sockaddr_in6 struct in anticipation of future use
* changed several uv::ll fn signatures to use generics and be more flexible
with the ptr types they get passed
* add uv_err_data and a helper fn to return it.. packages up err_name and
err_msg info from the uv_get_last_error() stuff..
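the flavor of the generics change, roughly (hypothetical fn; the rustrt
binding and c-types naming are assumptions):

    unsafe fn read_stop<T>(stream: *T) -> ctypes::c_int {
        // any handle-shaped ptr type is fine; C treats it as uv_stream_t*
        rustrt::rust_uv_read_stop(stream as *ctypes::void)
    }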
The old way was inconsistent---the head was unboxed but the
tail was boxed. This resulted in numerous needless copies and
also made the borrow check unhappy, because the head tended to be
stored in mutable memory.
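to make that concrete (the "new" layout is my assumption about the fix):

    enum list<T> { cons(T, @list<T>), nil }   // old: unboxed head, boxed tail
    // presumably the fix boxes the head as well, so copying a cons cell
    // just bumps refcounts:
    // enum list<T> { cons(@T, @list<T>), nil }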
seems to hold up pretty well.
uv::hl API is affected.. had to do work on tests and std::timer code that
leverages the global loop/high_level_loop API.
see test_stress_gl_uv_global_loop_high_level_global_timer for a stress
example.. it takes a while to run, but it exits cleanly (something I could
never accomplish with earlier iterations of the global loop)
.. leveraging std::uv, we have:
timer::delayed_send - send a value over a provided channel after the
timeout has passed
timer::sleep - block the current task for the specified period
both of these fns (and everything that goes in timer.rs) leverage the
uv_timer_* API
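conceptually (signatures assumed, not the actual decls), sleep is just
delayed_send to a port that we immediately block on:

    fn sleep(msecs: uint) {
        let exit_po = comm::port();
        let exit_ch = comm::chan(exit_po);
        delayed_send(msecs, exit_ch, ());
        comm::recv(exit_po);  // parks the task until the uv_timer fires
    }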
.. fixes issue, in previous commit, with the global loop test hanging on
32bit linux (this was because the struct was too small, so (presumably)
the data member was garbled.. yippy)
- moved global loop tests, as well.. will add tests in uv_hl that encompass
rolling your own high_level_loop via uv::hl::run_high_level_loop()
- also whitespace cleanups and misc warning cleanup..
- doesn't work on 32bit linux
.. seeing an occasional valgrind/barf spew on some invalid reads/writes..
need to investigate further.. I think it's related to my poor citizen
conduct, re: pointers stashed in rust_kernel..
- starting/stopping the loop based on client work is functioning correctly
- the issue appears to be that, when the process is about to exit, the
signal to let weak tasks know that they need to exit isn't getting fired.
.. up next: windows!
.. impl'd uv::direct::read_stop() and uv::direct::close() to wrap things up
.. demonstrated sending data out of the uv_read_cb via a channel (which
we block on to recv all of it, complete w/ EOF notification) that is
read from after the loop exits.
.. helpers to read the guts of a uv_buf_t
.. an idea I'm kicking around: these hideous data accessor functions are
starting to pile up in uv::direct .. I might make impl/iface pairs
for the various uv_* types that I'm using, in order to encapsulate those
data access functions and, perhaps, make the access look a little cleaner
(it still won't be straight field access, but it'll be a lot better)
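sketched out, the idea looks something like this (era syntax; the iface
and helper names are invented):

    iface uv_buf {
        fn get_base() -> *u8;
        fn get_len() -> uint;
    }
    impl of uv_buf for uv_buf_t {
        fn get_base() -> *u8 { direct::get_base_from_buf(self) }
        fn get_len() -> uint { direct::get_len_from_buf(self) }
    }
    // call sites then read buf.get_len() instead of threading the buf
    // through free accessor functions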
.. formatting cleanup to satisfy make check
so we're now adhering to the libuv C API and passing structs by-val where
it is expected, instead of pulling pointer trickery (or worse, having to
malloc structs in c++ to be passed back to rust and then into C again)
have to use the ++ sigil in rust-side extern fn decls in order to have rust
actually copy the struct, by value, onto the C stack. gotcha, indeed.
also adding a helper method to verify/remind how to pass a struct by-val
into C... check out the rust fn sig for rust_uv_ip4_test_verify_port_val()
for more infos
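the decl probably looks something like this (reconstructed; the c-type
naming is a guess):

    extern mod rustrt {
        // ++ is the by-copy mode sigil: it makes rust copy the struct
        // onto the C stack rather than quietly passing a pointer
        fn rust_uv_ip4_test_verify_port_val(++addr: sockaddr_in,
                                            expected: ctypes::c_int) -> bool;
    }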
.. but passing sockaddr_in by val back to C is broken, still passing by
ptr
.. the uv_write_cb is processed, but we have a status -1.. there is
also valgrind spew.. so buf passing is broken, still.
lots of changes, here.. should've committed sooner.
- added a uv::direct module that contains rust fns that map, as neatly as
possible, to the libuv C library. they operate on ptrs to libuv
structs mapped in rust, as much as possible (there are some notable
exceptions). these uv::direct fns should only take inputs from rust and,
as necessary, translate them into C-friendly types before passing them to
the C functions. We want them to return ints, as the libuv functions do,
so we can start tracking status (see the sketch after this list).
- the notable exceptions for structs, above, are due to gh-1402, which
prevents us from passing structs, by value, across the Rust<->C barrier
(they turn to garbage, pretty much). So in the cases where we get back
by-val structs from C (uv_buf_init(), uv_ip4_addr(), uv_err_t in
callbacks), we're going to use *ctypes::void (or just errnum ints for
uv_err_t) until gh-1402 is resolved.
- the crust functions used in these uv::direct fns for callbacks from libuv
will eschew uv_err_t, if possible, in favor of a plain int.. if at all
possible (probably isn't.. hm.. I know libuv wants to eventually replace
uv_err_t with an int, as well.. so hm).
- started fleshing out a big, gnarly test case to exercise the tcp request
side of the uv::direct functions. I'm at the point where, after the
connection is established, we write to the stream... when the writing is
done, we will read from it, then tear the whole thing down.
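the flavor of the layer, sketched (fn names, params and the rustrt binding
are illustrative, not the real decls):

    // thin unsafe fn that hands back libuv's status int..
    unsafe fn tcp_connect(req: *uv_connect_t, handle: *uv_tcp_t,
                          addr: *ctypes::void) -> ctypes::c_int {
        rustrt::rust_uv_tcp_connect(req, handle, addr)
    }
    // ..paired w/ a crust fn that libuv calls back into on its thread
    crust fn on_connect_cb(req: *uv_connect_t, status: ctypes::c_int) {
        // 0 is success; on failure, pull err_name/err_msg off the loop
        // via the uv_get_last_error() glue
    }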
overall, it turns out that doing "close to the metal" interaction with
c libraries is painful (and more chatty) when orchestrated from rust. My
understanding is that not much, at all, is written in this fashion in the
existing core/std codebase.. malloc'ing in C has been preferred, from what
I've gathered. So we're treading new ground, here!