This makes the included benchmark more than 3 times faster. Also,
`.unshift(x)` is now faster, as it is implemented as `.insert(0, x)`, which
can reuse the allocation if necessary.
For `str.as_mut_buf`, un-closure-ification is achieved by outright removal (see commit message). The others are replaced by `.as_ptr`, `.as_mut_ptr` and `.len`.
`.as_mut_buf` was used exactly once, in `.push_char` which could be
written in a simpler way, using the `&mut ~[u8]` that it already
retrieved. In the rare situation when someone really needs
`.as_mut_buf`-like functionality (getting a `*mut u8`), they can go via
`str::raw::as_owned_vec`.
This code in resolve accidentally forced all types with an impl to become
public. This fixes it by defaulting to inheriting the privacy of what was
previously there, and then becoming `true` if nothing else exists.
Closes #10545
### Remove {As,Into,To}{Option,Either,Result} traits.
Expanded, that is:
- `AsOption`
- `IntoOption`
- `ToOption`
- `AsEither`
- `IntoEither`
- `ToEither`
- `AsResult`
- `IntoResult`
- `ToResult`
These were defined for each other but never *used* anywhere. They are
all trivial and so removal will have negligible effect upon anyone.
`Either` has fallen out of favour (and its implementation of these
traits of dubious semantics), `Option<T>` → `Result<T, ()>` was never
really useful and `Result<T, E>` → `Option<T>` should now be done with
`Result.ok()` (mirrored with `Result.err()` for even more usefulness).
In summary, there's really no point in any of these remaining.
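For reference, the `Result` → `Option` direction with the `ok()`/`err()` adapters looks like this (a minimal sketch in present-day Rust; the concrete types are just placeholders):

```rust
fn main() {
    let good: Result<i32, String> = Ok(3);
    let bad: Result<i32, String> = Err("oops".to_string());

    // `ok()` keeps the success value and discards the error.
    assert_eq!(good.ok(), Some(3));
    // `err()` is the mirror image: it keeps the error value.
    assert_eq!(bad.err(), Some("oops".to_string()));
}
```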
### Rename To{Str,Bytes}Consume traits to Into*.
That is:
- `ToStrConsume` → `IntoStr`;
- `ToBytesConsume` → `IntoBytes`.
The removal of the aliasing `&mut []` and `&[]` from `shift_opt` also comes with its simplification.
The above also allows the use of `copy_nonoverlapping_memory` in `[].copy_memory` (I did an audit of each use of `.copy_memory` and `std::vec::bytes::copy_memory`, and I believe none of them are called with arguments that can ever alias). This change requires that `unsafe` code using `copy_memory` **needs** to respect the aliasing rules of `&mut []`.
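A rough sketch of why the non-overlapping variant is legal here, written with today's `std::ptr::copy_nonoverlapping` (the modern descendant of `copy_nonoverlapping_memory`): a `&mut [u8]` destination cannot alias a `&[u8]` source, so the stricter memcpy-style intrinsic is sound.

```rust
// Minimal sketch: because `dst` is `&mut`, it cannot overlap `src`,
// so the non-overlapping copy is sound.
fn copy_into(dst: &mut [u8], src: &[u8]) {
    assert!(dst.len() >= src.len(), "destination too small");
    unsafe {
        std::ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), src.len());
    }
}

fn main() {
    let src = [1u8, 2, 3];
    let mut dst = [0u8; 4];
    copy_into(&mut dst, &src);
    assert_eq!(dst, [1, 2, 3, 0]);
}
```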
This pull request completely rewrites std::comm and all associated users. Some major bullet points:
* Everything now works natively
* oneshots have been removed
* shared ports have been removed
* try_recv no longer blocks (recv_opt blocks)
* constructors are now Chan::new and SharedChan::new
* failure is propagated on send
* stream channels are 3x faster
I have acquired the following measurements on this patch. I compared against Go, but remember that Go's channels are fundamentally different from ours in that sends are blocking by default. This means it's not a totally fair comparison, but it's good to see ballpark numbers anyway.
```
          oneshot   stream   shared1
std         2.111    3.073     1.730
my          6.639    1.037     1.238
native      5.748    1.017     1.250
go8         1.774    3.575     2.948
go8-inf      slow    0.837     1.376
go8-128     4.832    1.430     1.504
go1         1.528    1.439     1.251
go2         1.753    3.845     3.166
```
I had three benchmarks:
* oneshot - N times, create a "oneshot channel", send on it, then receive on it (no task spawning)
* stream - N times, send from one task to another task, wait for both to complete
* shared1 - create N threads, each of which sends M times, and a port receives N*M times.
The rows are as follows:
* `std` - the current libstd implementation (before this pull request)
* `my` - this pull request's implementation (in M:N mode)
* `native` - this pull request's implementation (in 1:1 mode)
* `goN` - go's implementation with GOMAXPROCS=N. The only relevant value is 8 (I had 8 cores on this machine)
* `goN-X` - go's implementation where the channels in question were created with buffers of size `X` to behave more similarly to rust's channels.
* Streams are now ~3x faster than before (fewer allocations and more optimized)
* Based on a single-producer single-consumer lock-free queue that doesn't
always have to allocate on every send.
* Blocking via mutexes/cond vars outside the runtime
* Streams work in/out of the runtime seamlessly
* Select now works in/out of the runtime seamlessly
* Streams will now fail!() on send() if the other end has hung up
* try_send() will not fail
* PortOne/ChanOne removed
* SharedPort removed
* MegaPipe removed
* Generic select removed (only one kind of port now)
* API redesign
* try_recv == never block
* recv_opt == block, don't fail
* iter() == Iterator<T> for Port<T>
* removed peek
* Type::new
* Removed rt::comm
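To make the redesigned API above concrete, here is a hedged sketch using its modern descendant, `std::sync::mpsc` (the 2013 names `Chan`, `Port` and `recv_opt` no longer exist; the semantics shown are the ones listed above: the constructor returns a pair, `try_recv` never blocks, `recv` blocks, and the receiving side is iterable):

```rust
use std::sync::mpsc::{channel, TryRecvError};
use std::thread;

fn main() {
    // Constructor returns a (sender, receiver) pair.
    let (tx, rx) = channel();
    thread::spawn(move || {
        for i in 0..3 {
            tx.send(i).unwrap();
        }
    });

    // Non-blocking receive: either a value, "nothing yet", or "hung up".
    match rx.try_recv() {
        Ok(v) => println!("got {} without blocking", v),
        Err(TryRecvError::Empty) => println!("nothing ready yet"),
        Err(TryRecvError::Disconnected) => println!("sender hung up"),
    }

    // Iterate over the remaining values until the sender is dropped.
    for v in rx.iter() {
        println!("received {}", v);
    }
}
```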
Docs for `copy_memory`: `&mut [u8]` and `&[u8]` really shouldn't be
overlapping at all (it's part of the uniqueness/aliasing guarantee of `&mut`),
so there's no point in encouraging it.
This could prevent callers from catching the situation and lead to e.g. early
iterator terminations (cf. `Reader::read_byte`), since `None` is to be
returned only on EOF.
Understand 'pkgid' in stage0. As a bonus, the snapshot now contains no metadata
(now that those changes have landed), and the snapshot download is half as large
as it used to be!
The problem was that std::run::Process::new() was unwrap()ing the result
of std::io::process::Process::new(), which returns None in the case
where the io_error condition is raised to signal failure to start the
process.
Have std::run::Process::new() similarly return an Option<run::Process>
to reflect the fact that a subprocess might have failed to start. Update
utility functions run::process_status() and run::process_output() to
return Option<ProcessExit> and Option<ProcessOutput>, respectively.
Various parts of librustc and librustpkg needed to be updated to reflect
these API changes.
closes #10754
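A rough modern-Rust analogue of the change, shown with `std::process::Command` (which reports spawn failure through a `Result` rather than an `Option`, but the point is the same: starting a subprocess can fail and the caller must handle it instead of unwrapping blindly):

```rust
use std::process::Command;

fn main() {
    // Starting the child can fail (e.g. the binary doesn't exist),
    // so the result is matched rather than unwrapped.
    match Command::new("ls").arg("-l").output() {
        Ok(output) => println!("exit status: {}", output.status),
        Err(e) => eprintln!("failed to start process: {}", e),
    }
}
```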
This adds a bunch of useful Reader and Writer implementations. I'm not a
huge fan of the name `util` but I can't think of a better name and I
don't want to make `std::io` any longer than it already is.
- `Buffer.lines()` returns `LineIterator`, which yields lines using
  `.read_line()` (a usage sketch follows this list).
- `Reader.bytes()` now takes `&mut self` instead of `self`.
- `Reader.read_until()` swallows `EndOfFile`. This also affects
`.read_line()`.
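A small sketch of the `lines()` usage in present-day terms (`std::io::BufRead` is the descendant of the `Buffer` trait; the file name here is just an example):

```rust
use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() -> std::io::Result<()> {
    // Each iteration reads one line via the underlying read-line logic.
    let reader = BufReader::new(File::open("example.txt")?);
    for line in reader.lines() {
        println!("{}", line?);
    }
    Ok(())
}
```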
This replaces the link meta attributes with a pkgid attribute and uses a hash
of this as the crate hash. This makes the crate hash computable by things
other than the Rust compiler. It also switches the hash function to SHA1 since
that is much more likely to be available in shell, Python, etc than SipHash.
Fixes #10188, #8523.
This moves `std::rand::distributions::{Normal, StandardNormal}` to `...::distributions::normal`, reexporting `Normal` from `distributions` (and similarly for `Exp` and `Exp1`), and adds:
- Log-normal
- Chi-squared
- F
- Student T
all of which are implemented in C++11's random library. Tests in 0424b8aded. Note that these are approximately half documentation & half implementation (of which a significant portion is boilerplate `}`'s and so on).
This implements parts of the changes to `Result` and `Option` I proposed and discussed in this thread: https://mail.mozilla.org/pipermail/rust-dev/2013-November/006254.html
This PR includes:
- Adding `ok()` and `err()` option adapters for both `Result` variants.
- Removing `get_ref`, `expect` and iterator constructors for `Result`, as they are reachable with the variant adapters.
- Removing `Result`s `ToStr` bound on the error type because of composability issues. (See https://mail.mozilla.org/pipermail/rust-dev/2013-November/006283.html)
- Some warning cleanups
In order to keep up to date with changes to the libraries that `llvm-config`
spits out, the dependencies on LLVM are listed in a dynamically generated rust
file. This file is now regenerated automatically whenever LLVM is updated, so
it stays current.
At the same time, this cleans out some old cruft in the makefiles which is no
longer necessary in terms of dependencies.
Closes #10745, closes #10744
It's useful to allow users to get at the internal std::rt::comm::Port,
and other such fields, since they implement important traits like
Select.
See [rust-dev] "select on std::comm::Port and different types" at https://mail.mozilla.org/pipermail/rust-dev/2013-November/006735.html for background.
Right now, as pointed out in #8132, it is very easy to introduce a subtle race
in the runtime. I believe that this is the cause of the current flakiness on the
bots.
I have taken the last idea mentioned in that issue which is to use a lock around
descheduling and context switching in order to solve this race.
Closes #8132
This reverts commit c54427ddfb.
Leave in the `#[ignore]`s that were added to rustpkg tests.
Conflicts:
src/librustc/driver/driver.rs
src/librustc/metadata/creader.rs
The `integer_decode()` function decodes a float (f32/f64)
into integers containing the mantissa, exponent and sign.
It's needed for the `rationalize()` implementation of #9838.
The code was ported from ABCL [1].
[1] http://abcl.org/trac/browser/trunk/abcl/src/org/armedbear/lisp/FloatFunctions.java?rev=14465#L94
I got permission to use this code for Rust from Peter Graves (the ABCL copyright holder). If there's any further IP clearance needed, let me know.
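For readers unfamiliar with the operation, this is roughly what decoding an `f64` into (mantissa, exponent, sign) looks like (a sketch in today's Rust using `f64::to_bits`; it just follows the IEEE-754 layout of 1 sign bit, 11 exponent bits with bias 1023, and 52 mantissa bits plus an implicit leading 1 for normal numbers):

```rust
fn integer_decode(f: f64) -> (u64, i16, i8) {
    let bits = f.to_bits();
    let sign: i8 = if bits >> 63 == 0 { 1 } else { -1 };
    let mut exponent = ((bits >> 52) & 0x7ff) as i16;
    let mantissa = if exponent == 0 {
        // Subnormal: no implicit leading bit.
        (bits & 0xf_ffff_ffff_ffff) << 1
    } else {
        // Normal: restore the implicit leading 1.
        (bits & 0xf_ffff_ffff_ffff) | 0x10_0000_0000_0000
    };
    // Remove the exponent bias (1023) and account for the 52 mantissa bits.
    exponent -= 1023 + 52;
    (mantissa, exponent, sign)
}

fn main() {
    let (m, e, s) = integer_decode(0.5);
    // 0.5 == 2^52 * 2^(-53)
    assert_eq!(s as f64 * m as f64 * (2.0f64).powi(e as i32), 0.5);
}
```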
This function had type &[u8] -> ~str, i.e. it allocates a string
internally, even though the non-allocating versions that take &[u8] ->
&str and ~[u8] -> ~str are all that is necessary in most circumstances.
This registers new snapshots after the landing of #10528, and then goes on to tweak the build process to build a monolithic `rustc` binary for use in future snapshots. This mainly involved dropping the dynamic dependency on `librustllvm`, so that's now built as a static library (with a dynamically generated rust file listing LLVM dependencies).
This currently doesn't actually make the snapshot any smaller (24MB => 23MB), but I noticed that the executable has 11MB of metadata so once progress is made on #10740 we should have a much smaller snapshot.
There's not really a super-compelling reason to distribute just a binary because we have all the infrastructure for dealing with a directory structure, but to me it seems "more correct" that a snapshot compiler is just a `rustc` binary.
This method is the mutable version of ImmutableVector::split. It is
a DoubleEndedIterator, making mut_rsplit irrelevant. The size_hint
method is not optimal because of #9629.
At the same time, clarify *split* iterator doc.
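In modern Rust this survives as `split_mut` on slices; a quick sketch of the idea (mutable, non-overlapping sub-slices separated by elements matching a predicate):

```rust
fn main() {
    let mut v = [1, 0, 2, 3, 0, 4];
    // Split on zeros and mutate each piece independently; the pieces
    // never overlap, which is what makes handing out &mut slices sound.
    for group in v.split_mut(|&x| x == 0) {
        for elem in group.iter_mut() {
            *elem *= 10;
        }
    }
    assert_eq!(v, [10, 0, 20, 30, 0, 40]);
}
```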
mut_chunks() returns an iterator that produces mutable slices. This is the mutable version of the existing chunks() method on the ImmutableVector trait.
EDIT: This uses only safe code now.
PREVIOUSLY:
I tried to get this working with safe code only, but I couldn't figure out how to make that work. Before #8624, the exact same code worked without the need for the transmute() call. With that fix and without the transmute() call, the compiler complains about the call to mut_slice(). I think the issue is that the mutable slice that is returned will live longer than the self parameter since the self parameter doesn't have an explicit lifetime. However, that is the way that the Iterator trait defines the next() method. I'm sure there is a good reason for that, although I don't quite understand why. Anyway, I think the interface is safe, since the MutChunkIter will only hand out non-overlapping pointers and there is no way to get it to hand out the same pointer twice.
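A short sketch of what the iterator provides, using the modern name `chunks_mut` (the handed-out slices are disjoint, which is why the interface is sound even though each item is a `&mut` slice):

```rust
fn main() {
    let mut v = [1, 2, 3, 4, 5];
    // Each chunk is a non-overlapping &mut sub-slice of length at most 2.
    for chunk in v.chunks_mut(2) {
        chunk[0] = 0;
    }
    assert_eq!(v, [0, 2, 0, 4, 0]);
}
```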
BufferedWriter::inner flushes before returning the underlying writer.
BufferedWriter::write no longer flushes the underlying writer.
LineBufferedWriter::write flushes up to the *last* newline in the input
string, not the first.
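For comparison, today's `std::io::LineWriter` keeps the behaviour described here: data is buffered, and a write is flushed up to the last newline it contains (a minimal sketch; the file name is just an example):

```rust
use std::fs::File;
use std::io::{LineWriter, Write};

fn main() -> std::io::Result<()> {
    let mut w = LineWriter::new(File::create("example.log")?);
    // Everything up to and including the *last* '\n' is written out;
    // the trailing "partial" stays buffered until the next newline or drop.
    w.write_all(b"first line\nsecond line\npartial")?;
    Ok(())
}
```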
In this series of commits, I've implemented static linking for rust. The scheme I implemented was the same as my [mailing list post](https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html).
The commits have more details on the nitty gritty of what went on. I've rebased this on top of my native mutex pull request (#10479), but I imagine that it will land before this lands, I just wanted to pre-emptively get all the rebase conflicts out of the way (because this is reorganizing building librustrt as well).
Some contentious points I want to make sure are all good:
* I've added more "compiler chooses a default" behavior than I would like, I want to make sure that this is all very clearly outlined in the code, and if not I would like to remove behavior or make it clearer.
* I want to make sure that the new "fancy suite" tests are ok (using make/python instead of another rust crate)
If we do indeed pursue this, I would be more than willing to write up a document describing how linking in rust works. I believe that this behavior should be very understandable, and the compiler should never hinder someone just because linking is a little fuzzy.
This commit alters the build process of the compiler to build a static
librustrt.a instead of a dynamic version. This means that we can stop
distributing librustrt as well as default linking against it in the compiler.
This also means that if you attempt to build rust code without libstd, it will
no longer work if there are any landing pads in play. The reason for this is
that LLVM and rustc will emit calls to the various upcalls in librustrt used to
manage exception handling. In theory we could split librustrt into librustrt and
librustupcall. We would then distribute librustupcall and link to it for all
programs using landing pads, but I would rather see just one librustrt artifact
and simplify the build process.
The major benefit of doing this is that building a static rust library for use
in embedded situations all of a sudden just became a whole lot more feasible.
Closes #3361
This commit implements the support necessary for generating both intermediate
and result static rust libraries. This is an implementation of my thoughts in
https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html.
When compiling a library, we still retain the "lib" option, although now there
are "rlib", "staticlib", and "dylib" as options for crate_type (and these are
stackable). The idea of "lib" is to generate the "compiler default" instead of
having to choose (although all are interchangeable). For now I have left the
"compiler default" to be a dynamic library for size reasons.
Of the rust libraries, lib{std,extra,rustuv} will bootstrap with an
rlib/dylib pair, but lib{rustc,syntax,rustdoc,rustpkg} will only be built as a
dynamic object. I chose this for size reasons, but also because you're probably
not going to be embedding the rustc compiler anywhere any time soon.
Other than the options outlined above, there are a few defaults/preferences that
are now opinionated in the compiler:
* If both a .dylib and .rlib are found for a rust library, the compiler will
prefer the .rlib variant. This is overridable via the -Z prefer-dynamic option
* If generating a "lib", the compiler will generate a dynamic library. This is
overridable by explicitly saying what flavor you'd like (rlib, staticlib,
dylib).
* If no options are passed on the command line, and no crate_type is found in
the destination crate, then an executable is generated
With this change, you can successfully build a rust program with 0 dynamic
dependencies on rust libraries. There is still a dynamic dependency on
librustrt, but I plan on removing that in a subsequent commit.
This change includes no tests just yet. Our current testing
infrastructure/harnesses aren't very amenable to doing flavorful things with
linking, so I'm planning on adding a new mode of testing which I believe belongs
as a separate commit.
Closes #552
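As an illustration of the crate-level knob described above (hedged: this uses today's `#![crate_type = ...]` inner-attribute syntax; in 2013 it was written `#[crate_type = "rlib"];` at the crate root), a library asking to be built as an rlib looks like:

```rust
// Build this crate as an rlib unless the command line overrides it.
#![crate_type = "rlib"]

pub fn greeting() -> &'static str {
    "hello from an rlib"
}
```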
- Removed module reexport workaround for the integer module macros
- Removed legacy reexports of `cmp::{min, max}` in the integer module macros
- Combined a few macros in `vec` into one
- Documented a few issues
This adds an implementation of the Chase-Lev work-stealing deque to libstd
under std::rt::deque. I've been unable to break the implementation of the deque
itself, and it's not super highly optimized just yet (everything uses a SeqCst
memory ordering).
The major snag in implementing the chase-lev deque is that the buffers used to
store data internally cannot get deallocated back to the OS. In the meantime, a
shared buffer pool (synchronized by a normal mutex) is used to
deallocate/allocate buffers from. This is done in hope of not overcommitting too
much memory. It is in theory possible to eventually free the buffers, but one
must be very careful in doing so.
I was unable to get some good numbers from src/test/bench tests (I don't think
many of them are slamming the work queue that much), but I was able to get some
good numbers from one of my own tests. In a recent rewrite of select::select(),
I found that my implementation was incredibly slow due to contention on the
shared work queue. Upon switching to the parallel deque, I saw the contention
drop to 0 and the runtime go from 1.6s to 0.9s with the most amount of time
spent in libuv awakening the schedulers (plus allocations).
Closes #4877
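For a feel of the owner/thief split, here is a hedged sketch using the present-day `crossbeam-deque` crate (an external crate, not the code in this PR), which implements the same Chase-Lev design:

```rust
use crossbeam_deque::{Steal, Worker};

fn main() {
    // The owning scheduler pushes and pops at one end of the deque...
    let worker: Worker<u32> = Worker::new_lifo();
    for i in 0..4 {
        worker.push(i);
    }

    // ...while other schedulers steal from the opposite end.
    let stealer = worker.stealer();
    match stealer.steal() {
        Steal::Success(task) => println!("stole task {}", task),
        Steal::Empty => println!("nothing to steal"),
        Steal::Retry => println!("lost a race, try again"),
    }

    // The owner still pops its most recently pushed item.
    assert_eq!(worker.pop(), Some(3));
}
```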
I have written some benchmark tests for `push`, `push_many`, `join`,
`join_many` and `ends_with_path`.
Let me know what you think (@cmr).
Thanks in advance.
This has one commit from a separate pull request (because these commits depend on that one), but otherwise the extra details can be found in the commit messages. The `rt::thread` module has been generally cleaned up for everyday safe usage (and it's a bug if it's not safe).
* Added doc comments explaining what all public functionality does.
* Added the ability to spawn a detached thread
* Added the ability for the procs to return a value in 'join'
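The last two bullets map directly onto what today's `std::thread` offers; a minimal sketch (the modern API, not the `rt::thread` internals themselves):

```rust
use std::thread;

fn main() {
    // A thread whose closure returns a value; `join` hands that value back.
    let handle = thread::spawn(|| 6 * 7);
    assert_eq!(handle.join().unwrap(), 42);

    // "Detaching" is just dropping the JoinHandle instead of joining it.
    thread::spawn(|| println!("running detached"));
}
```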
I've noticed I use this pattern quite a bit:
~~~rust
do spawn {
loop {
match port.try_recv() {
Some(x) => ...,
None => ...,
}
}
}
~~~
The `RecvIterator`, returned from a default `recv_iter` method on the `GenericPort` trait, allows you to reduce this down to:
~~~rust
do spawn {
for x in port.recv_iter() {
...
}
}
~~~
As demonstrated in the tests, you can also access the port from within the `for` block for further `recv`ing and `peek`ing with no borrow errors, which is quite nice.
Whenever the runtime is shut down, add a few hooks to clean up some of the
statically initialized data of the runtime. Note that this is an unsafe
operation because there's no guarantee on behalf of the runtime that there's no
other code running which is using the runtime.
This helps turn down the noise a bit in the valgrind output related to
statically initialized mutexes. It doesn't turn the noise down to 0 because
there are still statically initialized mutexes in dynamic_lib and
os::with_env_lock, but I believe that it would be easy enough to add exceptions
for those cases and I don't think that it's the runtime's job to go and clean up
that data.
This moves the locking/waiting methods to returning an RAII struct instead of
relying on closures. Additionally, this changes the methods to all take
'&mut self' to discourage recursive locking. The new method to block is to call
`wait` on the returned RAII structure instead of calling it on the lock itself
(this enforces that the lock is held).
At the same time, this improves the Mutex interface a bit by allowing
destruction of non-initialized members and by allowing construction of an empty
mutex (nothing initialized inside).
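The shape of the API, shown with today's `std::sync` types (a hedged sketch, not this PR's runtime `Mutex` itself): locking returns an RAII guard, and waiting goes through that guard, so holding the lock is enforced by the types.

```rust
use std::sync::{Condvar, Mutex};

fn wait_until_ready(lock: &Mutex<bool>, cvar: &Condvar) {
    // Locking yields a guard; dropping the guard unlocks.
    let mut ready = lock.lock().unwrap();
    // Waiting consumes the guard and gives it back on wake-up,
    // proving the lock is held across the wait.
    while !*ready {
        ready = cvar.wait(ready).unwrap();
    }
}

fn main() {
    let lock = Mutex::new(true);
    let cvar = Condvar::new();
    wait_until_ready(&lock, &cvar); // returns immediately: already "ready"
}
```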
This patchset fixes some parts broken on Win64.
This also adds `--disable-pthreads` flags to llvm on mingw-w64 archs (both 32-bit and 64-bit, not mingw) due to bad performance. See #8996 for discussion.
This is useful both for performance (otherwise logging is unbuffered) and for
correctness. Because when a task is destroyed we can't block the task
waiting for the logger to close, loggers are opened with a 'CloseAsynchronously'
specification. This causes libuv to defer the call to close() until the next
turn of the event loop.
If you spin in a tight loop around printing, you never yield control back to the
libuv event loop, meaning that you simply enqueue a large number of close
requests but nothing is actually closed. This queue ends up never getting
closed, meaning that if you keep trying to create handles one will eventually
fail; the runtime will then attempt to print the failure, causing mass
destruction.
Caching will provide better performance as well as prevent creation of too many
handles.
Closes #10626
The reasons for doing this are:
* The model on which linked failure is based is inherently complex
* The implementation is also very complex, and there are few remaining who
fully understand the implementation
* There are existing race conditions in the core context switching function of
the scheduler, and possibly others.
* It's unclear whether this model of linked failure maps well to a 1:1 threading
model
Linked failure is often a desired aspect of tasks, but we would like to take a
much more conservative approach in re-implementing linked failure if at all.
Closes #8674, closes #8318, closes #8863
Make TrieMap/TrieSet's find_mut check the key for external nodes.
Without this, find_mut sometimes returns a reference to the value of a
different key when querying for a non-present key.
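A hypothetical, heavily simplified sketch of the fix (not the real TrieMap code): a leaf ("external") node must compare its stored key with the queried key before handing out a reference; skipping that check is exactly the bug described.

```rust
enum Node<V> {
    External(u64, V),
}

fn find_mut<V>(node: &mut Node<V>, key: u64) -> Option<&mut V> {
    match *node {
        Node::External(stored, ref mut value) if stored == key => Some(value),
        // Previously this case wrongly returned the value anyway.
        Node::External(..) => None,
    }
}

fn main() {
    let mut leaf = Node::External(7, "seven");
    assert!(find_mut(&mut leaf, 8).is_none()); // absent key must not hit key 7's value
    assert!(find_mut(&mut leaf, 7).is_some());
}
```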
This is based off of @blake2-ppc's work on #9429. That PR bitrotted and I haven't been able to contact the original author so I decided to take up the cause.
Overview
======
`Mut` encapsulates a mutable, non-nullable slot. The `Cell` type is currently used to do this, but `Cell` is much more commonly used as a workaround for the inability to move values into non-once functions. `Mut` provides a more robust API.
`Mut` duplicates the semantics of borrowed pointers with enforcement at runtime instead of compile time.
```rust
let x = Mut::new(0);
{
// make some immutable borrows
let p = x.borrow();
let y = *p.get() + 10;
// multiple immutable borrows are allowed simultaneously
let p2 = x.borrow();
// this would throw a runtime failure
// let p_mut = x.borrow_mut();
}
// now we can mutably borrow
let p = x.borrow_mut();
*p.get() = 10;
```
`borrow` returns a `Ref` type and `borrow_mut` returns a `RefMut` type, both of which are simple smart pointer types with a single method, `get`, which returns a reference to the wrapped data.
This also allows `RcMut<T>` to be deleted, as it can be replaced with `Rc<Mut<T>>`.
Changes
======
I've done things a little bit differently than the original proposal.
* I've added `try_borrow` and `try_borrow_mut` methods that return `Option<Ref<T>>` and `Option<RefMut<T>>` respectively instead of failing on a borrow check failure. I'm not totally sure when that'd be useful, but I don't see any reason to not put them in and @cmr requested them.
* `ReadPtr` and `WritePtr` have been renamed to `Ref` and `RefMut` respectively, as `Ref` is to `ref foo` and `RefMut` is to `ref mut foo` as `Mut` is to `mut foo`.
* `get` on `RefMut` now takes `&self` instead of `&mut self` for consistency with `&mut`. As @alexcrichton pointed out, this violates soundness by allowing aliasing `&mut` references.
* `Cell` is being left as is. It solves a different problem than `Mut` is designed to solve.
* There are no longer methods implemented for `Mut<Option<T>>`. Since `Cell` isn't going away, there's less of a need for these, and I didn't feel like they provided a huge benefit, especially as that kind of `impl` is very uncommon in the standard library.
Open Questions
============
* `Cell` should now be used exclusively for movement into closures. Should this be enforced by reducing its API to `new` and `take`? It seems like this use case will be completely going away once the transition to `proc` and co. finishes.
* Should there be `try_map` and `try_map_mut` methods along with `map` and `map_mut`?
I cannot tell whether the original comment was unsure about the
arithmetic calculations, or if it was unsure about the assumptions
being made about the alignment of the current allocation pointer.
The arithmetic calculation looks fine to me, though. This technique
is documented e.g. in Henry Warren's "Hacker's Delight" (section 3-1).
(I am sure one can find it elsewhere too; it's not an obscure
property.)
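For readers without the book at hand, the trick in question rounds a value up to the next multiple of a power-of-two alignment; a small sketch (the function name is just for illustration):

```rust
// For `align` a power of two, this yields the smallest multiple of
// `align` that is >= `p` (Hacker's Delight, section 3-1).
fn round_up(p: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (p + align - 1) & !(align - 1)
}

fn main() {
    assert_eq!(round_up(13, 8), 16);
    assert_eq!(round_up(16, 8), 16);
}
```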
New benchmark tests in vec.rs:
`push`, `starts_with_same_vector`, `starts_with_single_element`,
`starts_with_diff_one_element_end`, `ends_with_same_vector`,
`ends_with_single_element`, `ends_with_diff_one_element_beginning` and
`contains_last_element`
This isn't very useful yet, but it does replace most functionality of `@T`. The `Mut<T>` type will make it unnecessary to have a `GcMut<T>` so I haven't included one. Obviously it doesn't work for trait objects but that needs to be figured out for `Rc<T>` too.