.. there are some additional FIXME nags in net_tcp (L 1012) about blocking
because libuv is holding unsafe ptrs to task-local data. the proposed
fix is not really feasible w/ the current design, IMO, but i'll
leave it there in case someone really wants to make the case for it without
creating more hassle than it's worth.
we now have the best of what we had prior to libuv integration (proper
validation of an ipv4 string), along with libuv support
(initial ipv6 support).
libuv has even weaker facilities for validating an ipv6 input string
(but still more than what we had), so eventually the "right"
answer would be to roll a proper ipv6 address string parser
in rust
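for reference, a minimal sketch of what strict ipv4 string validation might look like in plain rust (modern syntax, not the code in this branch; `is_valid_ipv4` is a made-up name, not an actual net_ip function):

```rust
/// Strict ipv4 dotted-quad validation: exactly four decimal octets, each 0-255.
/// (Illustrative sketch only.)
fn is_valid_ipv4(s: &str) -> bool {
    let parts: Vec<&str> = s.split('.').collect();
    if parts.len() != 4 {
        return false;
    }
    parts.iter().all(|p| {
        // non-empty, digits only, and parses into the 0-255 range
        !p.is_empty()
            && p.chars().all(|c| c.is_ascii_digit())
            && p.parse::<u16>().map_or(false, |n| n <= 255)
    })
}

fn main() {
    assert!(is_valid_ipv4("127.0.0.1"));
    assert!(!is_valid_ipv4("256.0.0.1")); // octet out of range
    assert!(!is_valid_ipv4("1.2.3"));     // too few octets
}
```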
libuv's own ip vetting code appears to be in a somewhat woeful state,
for both ipv4 and ipv6 (there are some notes in the tests for net_ip, as
well as stuff added in uv_ll). They are aware of this and welcome patches.
I have rudimentary code in place that can verify whether the provided ip
string was, in fact, validly parsed by libuv, making a few assumptions
(see the sketch after this list):
* for ipv4, we assume that the platform's INADDR_NONE val is 0xffffffff;
I should write a helper to return this value from the platform's libc
headers instead of hard-coding it in rust.
* for ipv6, we assume that the library will always return '::' for
malformed inputs.. as is the case on 64bit ubuntu. I need to verify this
on other platforms.. but at least the debugging output is in place, so
if expectations don't line up, it'll be straightforward to address
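to make those assumptions concrete, the verification logic boils down to a sentinel comparison for ipv4 and a formatted-result comparison for ipv6. here's a rough modern-rust illustration of that shape, using std::net parsers as a stand-in for libuv's (the function names and the hard-coded constant are made up for the sketch, not the actual glue code):

```rust
use std::net::{Ipv4Addr, Ipv6Addr};

// ipv4: the sentinel idea -- a parse whose result is the INADDR_NONE
// bit-pattern (0xffffffff) is treated as "parse failed". The value is
// hard-coded here, the same assumption called out above.
const INADDR_NONE_ASSUMED: u32 = 0xffff_ffff;

fn ipv4_parse_looks_ok(s: &str) -> bool {
    match s.parse::<Ipv4Addr>() {
        Ok(addr) => u32::from(addr) != INADDR_NONE_ASSUMED,
        Err(_) => false,
    }
}

// ipv6: the "malformed input comes back as ::" idea -- a result equal to the
// unspecified address is suspect unless the input literally was "::".
fn ipv6_parse_looks_ok(s: &str) -> bool {
    match s.parse::<Ipv6Addr>() {
        Ok(addr) => !(addr == Ipv6Addr::UNSPECIFIED && s != "::"),
        Err(_) => false,
    }
}

fn main() {
    assert!(ipv4_parse_looks_ok("10.0.0.1"));
    assert!(ipv6_parse_looks_ok("::1"));
    assert!(ipv6_parse_looks_ok("::"));
}
```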
.. but the test is kind of broken.. it appears that rust pads structs for
alignment purposes? I can't get the struct's size to come out at 28, which
appears to be the native size of sockaddr_in6.. so we have a size 32 struct, for now.
i mistook an "unconstrained type" error, actually caused by a type-inference
mess-up because i didn't have the return vals in some closures wired up right,
for being due to not having a str as a str/& (a str will actually auto-coerce
to a str/&, so str::as_slice was erroneously added. my bad).
* updated rustdoc info for several functions
* changed read_stop to take control of the port returned by read_start
* made write_future do an explicit data copy of the binary vector it is
passed
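the rationale for that explicit copy is the usual one for an async write: the write outlives the call, so it can't keep borrowing the caller's buffer. a rough modern-rust sketch of the same shape (`write_future` here is a made-up stand-in using a thread, not the actual function's signature):

```rust
use std::thread;

// Hypothetical stand-in for an async write: copy the caller's bytes up front,
// then let the "future" (here, just a thread) own its private copy.
fn write_future(data: &[u8]) -> thread::JoinHandle<usize> {
    let owned: Vec<u8> = data.to_vec(); // explicit copy, decoupled from the caller
    thread::spawn(move || {
        // ... the real code would hand `owned` to the underlying write here ...
        owned.len()
    })
}

fn main() {
    let buf = vec![1u8, 2, 3];
    let handle = write_future(&buf);
    drop(buf); // the caller's buffer can go away; the copy is still alive
    assert_eq!(handle.join().unwrap(), 3);
}
```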
This comes with a terminology change. All linkage-symbols are 'extern'
now, including rust syms in other crates. Some extern ABIs are
merely "foreign". The term "native" is retired, not clear/useful.
What was "crust" is now "extern" applied to a _definition_. This
is a bit of an overloading, but should be unambiguous: it means
that the definition should be made available to some non-rust ABI.
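in today's syntax the distinction looks roughly like this (a sketch in modern rust, where the keyword survives in both roles): `extern` on a block declares foreign items we link against, while `extern` on a definition makes a rust definition callable from a non-rust ABI.

```rust
// `extern` block: a declaration of a *foreign* item (here, libc's strlen).
extern "C" {
    fn strlen(s: *const std::os::raw::c_char) -> usize;
}

// `extern` applied to a *definition*: a rust function whose definition is
// made available to a non-rust ABI (the role this change renames from "crust").
pub extern "C" fn rust_callback(x: i32) -> i32 {
    x + 1
}

fn main() {
    assert_eq!(rust_callback(41), 42);
    let n = unsafe { strlen(c"hello".as_ptr()) };
    assert_eq!(n, 5);
}
```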
- change port of tcp server test in uv_ll to avoid conflict w/ test in
net::tcp
- in a few places the tcp::read fn is used in tests w/ a timeout.. suspend
use of the timeout from here on out.
* there are a few places where I was experimenting w/ using `alt` in places
where `if`/`else` would've sufficed. don't drink the koolaid!
* I had an unneeded `else` structure (the `if` branch that preceded it
concluded with a `fail` statement.. I added the `fail` later in the dev
cycle for this branch, so I forgot to remove the `else` after doing so)
* made `prop_name: value` vs. `prop_name : value` usage consistent in record
decls and initialization
* changed an `alt` exp on an `ip_addr` to actually be exhaustive,
instead of using a catch-all clause
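the shape of that fix, written with today's `match` rather than the era's `alt`, and with a made-up two-variant ip_addr type for illustration:

```rust
// Hypothetical two-variant address type standing in for net::ip::ip_addr.
enum IpAddr {
    Ipv4(u8, u8, u8, u8),
    Ipv6(String),
}

fn describe(addr: &IpAddr) -> String {
    // Exhaustive: every variant is named, so adding a variant later becomes a
    // compile error here instead of silently falling into a catch-all `_` arm.
    match addr {
        IpAddr::Ipv4(a, b, c, d) => format!("v4: {a}.{b}.{c}.{d}"),
        IpAddr::Ipv6(s) => format!("v6: {s}"),
    }
}

fn main() {
    println!("{}", describe(&IpAddr::Ipv4(127, 0, 0, 1)));
    println!("{}", describe(&IpAddr::Ipv6("::1".to_string())));
}
```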
.. this test fails frequently, locally, when run with the batch of other
global_loop tests, due to how valgrind deals with multithreading
in the test app. not sure what to do, here.
- we now have two interfaces for the TCP/IP server/listener workflow,
based on different approaches to how users deal with the
flow of accepting a new tcp connection:
1. the "original" API closely mimics the low-level libuv API, in that we
have an on_connect_cb that the user provides *that is ran on the libuv
thread*. In this callback, the user can accept() a connection, turning it
into a tcp_socket.. of course, before accepting, they have the option
of passing it to a new task, provided they *make the cb block until
the accept is done* .. this is because, in libuv, you have to do the
uv_accept call in the span of that on_connect_cb callback that gets fired
when a new connection comes in. thems the breaks..
I wanted to just get rid of this API, because the general proposition of
users always running code on the libuv thread sounds like an invitation
for many future headaches. the restriction of having to decide immediately
whether to accept a connection (while being allowed to block libuv as
needed) isn't too bad for power users who can conceive of circumstances
where they would drop an incoming TCP connection and, in general, know
what they're doing.
but as a general API, I thought this was a bit cumbersome, so I ended up
devising..
2. an API that is initiated with a call to `net::tcp::new_listener()` ..
it has a similar signature to `net::tcp::listen()`, except that it just
returns an object that sort of behaves like a `comm::port`. Users can
block on the `tcp_conn_port` to receive new connections, either in the
current task or in a new task, depending on which API route they take
(`net::tcp::conn_recv` or `net::tcp::conn_recv_spawn` respectively).. there
is also a `net::tcp::conn_peek` function that will do a peek on the
underlying port to see if there are pending connections.
The main difference, with this API, is that the low-level libuv glue is
going to *accept every connection attempt*, along with the overhead that
that brings. But, this is a much more hassle-free API for 95% of use
cases and will probably be the one that most users will want to reach for.
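for a feel of the difference, here's a rough modern-rust sketch of the second style using std::net and an mpsc channel in place of the old comm::port. this is not the actual net::tcp API (tcp_conn_port, conn_recv, conn_recv_spawn, conn_peek), just the accept-everything-and-hand-it-over shape it describes:

```rust
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc;
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let (tx, rx) = mpsc::channel::<TcpStream>();

    // The "listener" side accepts *every* connection attempt and just forwards
    // the resulting stream over the channel -- analogous to the tcp_conn_port idea.
    thread::spawn(move || {
        for conn in listener.incoming().flatten() {
            if tx.send(conn).is_err() {
                break; // receiver went away
            }
        }
    });

    // Kick off one client so the example actually completes.
    thread::spawn(move || {
        let _ = TcpStream::connect(addr);
    });

    // The "user" side blocks on the port-like receiver, in this task or another.
    let conn = rx.recv().expect("listener thread hung up");
    println!("accepted a connection from {:?}", conn.peer_addr());
    Ok(())
}
```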
.. turns out that, without the export, the modules aren't accessible
outside of the crate itself. I thought that, by importing some module
into another (nesting it) and exporting it from that nested module (which
is, itself, exported from std.rc), my mod would end up in the build
artifact. This doesn't appear to be the case. learning is fun!
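the modern spelling of that lesson, as a sketch with made-up module names: nesting a module isn't enough, it has to be re-exported (`pub use` today, `export` back then) along a path that is itself public from the crate root.

```rust
// lib.rs (sketch; `inner` and `net_support` are illustrative names)
mod inner {
    pub mod net_support {
        pub fn hello() -> &'static str {
            "visible outside the crate"
        }
    }
}

// Without this re-export, `inner` is private, so `net_support` never shows up
// in the crate's public surface even though it gets compiled into the artifact.
pub use inner::net_support;
```

a downstream crate would then reach it as `the_crate::net_support::hello()`.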