With Rc no longer trying to statically prevent cycles (and thus no
longer using the Freeze bound), it seems appropriate to remove that
restriction from MutexArc as well.
Closes #9251.
Closes #11692. Instead of returning the original expression, a dummy expression
(with identical span) is returned. This prevents infinite loops of failed
expansions as well as odd double error messages in certain situations.
The details can be found in the comments I added to the test, but the gist of it
is that capturing output injects rescheduling a green task on failure, which
wasn't desired for the test in question.
cc #12340
The `do` keyword was deprecated in 0.10 (#11868) and is kept as a reserved
keyword (#12157), so the tutorial section about it no longer makes sense.
The spawning explanation was moved into '15.2 Closure compatibility'.
Fixed a misspelling.
Thanks for the clarifications.
Moved from 15.2 to 15.1.
Fixed a typo and applied pnkfelix's advice.
See the commit messages for more details, but this makes `std::str::is_utf8` slightly faster and 100% non-`unsafe`, and uses a similar approach to make the first scan of `from_utf8_lossy` 100% safe and faster.
This uses a vector iterator to avoid the necessity for unsafe indexing,
and makes this function slightly faster. Unfortunately #11751 means that
the iterator comes with repeated `null` checks which means the
pure-ASCII case still has room for significant improvement (and the
other cases too, but it's most significant for just ASCII).
Before:
```
is_utf8_100_ascii ... bench: 143 ns/iter (+/- 6)
is_utf8_100_multibyte ... bench: 134 ns/iter (+/- 4)
```
After:
```
is_utf8_100_ascii ... bench: 123 ns/iter (+/- 4)
is_utf8_100_multibyte ... bench: 115 ns/iter (+/- 5)
```
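As a rough illustration of the iterator-based approach (a simplified sketch, not the actual `std::str::is_utf8` implementation; it accepts some sequences a strict validator rejects, e.g. overlong encodings):

```rust
fn looks_like_utf8(bytes: &[u8]) -> bool {
    let mut iter = bytes.iter();
    while let Some(&b) = iter.next() {
        // Decide how many continuation bytes this leading byte requires.
        let needed = match b {
            0x00..=0x7F => 0,      // ASCII fast path
            0xC0..=0xDF => 1,      // 2-byte sequence
            0xE0..=0xEF => 2,      // 3-byte sequence
            0xF0..=0xF7 => 3,      // 4-byte sequence
            _ => return false,     // stray continuation byte or invalid leading byte
        };
        for _ in 0..needed {
            match iter.next() {
                Some(&c) if c & 0xC0 == 0x80 => {} // valid continuation byte
                _ => return false,                 // truncated or malformed sequence
            }
        }
    }
    true
}

fn main() {
    assert!(looks_like_utf8("héllo".as_bytes()));
    assert!(!looks_like_utf8(&[0xFF, 0x80]));
}
```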
There are a few parts to this PR:
* Implement unix pipes in libnative for unix platforms (thanks @Geal!)
* Implement named pipes in libnative for windows (terrible, terrible code)
* Remove `#[cfg(unix)]` from `mod unix` in `std::io::net`. This is a terrible name for what it is, but that's the topic of #12093.
The Windows implementation was significantly more complicated than I thought it would be, but it seems to be passing all the tests now.
Closes #11201
Extends the license and formatting check to `*.js` files in `src/doc` and `*.sh`, `*.pl`, `*.c`, and `*.h` files in `src/etc`. As best as I could tell, these files should be covered under the Rust project license.
cc @brson: Do any other scripts need a license? I'd like to double-check that this PR closes #4534.
Delete all the documentation from std::task that references linked
failure.
Tweak TaskBuilder to be more builder-like. `.name()` is now `.named()` and
`.add_wrapper()` is now `.with_wrapper()`. Remove `.watched()` and
`.unwatched()` as they didn't actually do anything.
Closes #6399.
This deadlock was caused when the channel was closed at just the right time: the
extra `self.cnt.fetch_add` should have preserved the DISCONNECTED state of the
channel, but by modifying it the channel entered a state in which the port would
never succeed in dropping.
This also moves the increment of `self.steals` to after the MAX_STEALS block.
The reason for this is that in `fn recv()` the steals variable is decremented
immediately after the try_recv(), which could in theory set steals to -1 if it
was previously set to 0 in try_recv().
Closes #12340
This is inspired by the [function naming in the Julia standard library](http://docs.julialang.org/en/release-0.2/stdlib/base/#Base.count_ones). It seems like a more self-explanatory name, and is more consistent with the accompanying methods, `leading_zeros` and `trailing_zeros`.
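For illustration, the renamed method lines up with its siblings like this on today's integer types:

```rust
fn main() {
    let x: u32 = 0b1011_0000;
    assert_eq!(x.count_ones(), 3);      // three bits set
    assert_eq!(x.leading_zeros(), 24);  // highest set bit is bit 7 of 32
    assert_eq!(x.trailing_zeros(), 4);  // lowest set bit is bit 4
}
```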
This replaces the iterator with one that handles lone surrogates
gracefully and uses that to implement `from_utf16_lossy` which replaces
invalid `u16`s with U+FFFD.
Work toward #9876.
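For example (shown with today's `String::from_utf16_lossy`; the version in this PR produced a `~str`), a lone surrogate becomes U+FFFD instead of causing a failure:

```rust
fn main() {
    // A lone high surrogate (0xD83D) followed by "hi".
    let v = [0xD83Du16, 0x0068, 0x0069];
    assert_eq!(String::from_utf16_lossy(&v), "\u{FFFD}hi");
}
```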
This adds `prepare.mk`, which is simply a more heavily-parameterized `install.mk`, then uses `prepare` to implement both `install` and the windows installer (`dist`). Smoke tested on both Linux and Windows.
* Implementation of pipe_win32 filled out for libnative
* Reorganize pipes to be clone-able
* Fix a few file descriptor leaks on error
* Factor out some common code into shared functions
* Make use of the if_ok!() macro for less indentation
Closes #11201
This renames the `n*` and `n*_ref` tuple getters to `val*` and `ref*` respectively, and adds `mut*` getters. It also removes the `CloneableTuple` and `ImmutableTuple` traits.
The previous code erroneously assumed that 'steals > cnt' was always true, but
that was a false assumption. The code was altered to decrement steals to a
minimum of 0 instead of taking all of cnt into account.
I didn't include the exact test from #12295 because it could run for quite
a while, and instead set the threshold for MAX_STEALS to much lower during
testing. I found that this triggered the old bug quite frequently when running
without this fix.
Closes #12295
This is useful in contexts like this:
```rust
let size = rdr.read_be_i32() as uint;
let mut limit = LimitReader::new(rdr.by_ref(), size);
let thing = read_a_thing(&mut limit);
assert!(limit.limit() == 0);
```
- adds a `LockGuard` type returned by `.lock` and `.trylock` that unlocks the mutex in the destructor
- renames `mutex::Mutex` to `StaticNativeMutex`
- adds a `NativeMutex` type with a destructor
- removes `LittleLock`
- adds `#[must_use]` to `sync::mutex::Guard` to remind people to use it
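For what it's worth, here is a rough sketch of the guard-in-destructor idea from the list above, with made-up names (it is not the actual `std::unstable::mutex`/`sync::mutex` API):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct SpinLock { locked: AtomicBool }

#[must_use] // like sync::mutex::Guard: dropping it immediately is probably a mistake
struct Guard<'a> { lock: &'a SpinLock }

impl SpinLock {
    const fn new() -> SpinLock { SpinLock { locked: AtomicBool::new(false) } }

    fn lock(&self) -> Guard<'_> {
        // Spin until we flip the flag from false to true.
        while self.locked.swap(true, Ordering::Acquire) {}
        Guard { lock: self }
    }
}

impl<'a> Drop for Guard<'a> {
    fn drop(&mut self) {
        // Unlocking happens in the destructor, so the lock can't be leaked
        // by an early return or a failure.
        self.lock.locked.store(false, Ordering::Release);
    }
}

fn main() {
    static LOCK: SpinLock = SpinLock::new();
    {
        let _guard = LOCK.lock(); // critical section starts here
        // ... protected work ...
    } // _guard dropped: the lock is released automatically
}
```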
Function parameters that are to be passed by value but don't fit into a
single register are currently passed by creating a copy on the stack and
passing a pointer to that copy to the callee. Since the copy is made
just for the function call, there are no aliases.
For example, this sometimes allows LLVM to eliminate unnecessary calls
to drop glue. Given
````rust
struct Foo {
    a: int,
    b: Option<~str>,
}

extern {
    fn eat(eat: Option<~str>);
}

pub fn foo(v: Foo) {
    match v {
        Foo { a: _, b } => unsafe { eat(b) }
    }
}
````
LLVM currently can't eliminate the drop call for the string, because it
only sees a _pointer_ to Foo, for which it has to expect an alias. So we
get:
````llvm
; Function Attrs: uwtable
define void @_ZN3foo20h9f32c90ae7201edbxaa4v0.0E(%struct.Foo* nocapture) unnamed_addr #0 {
"_ZN34std..option..Option$LT$$UP$str$GT$9glue_drop17hc39b3015f3b9c69dE.exit":
%1 = getelementptr inbounds %struct.Foo* %0, i64 0, i32 1, i32 0
%2 = load { i64, i64, [0 x i8] }** %1, align 8
store { i64, i64, [0 x i8] }* null, { i64, i64, [0 x i8] }** %1, align 8
%3 = ptrtoint { i64, i64, [0 x i8] }* %2 to i64
%.fca.0.insert = insertvalue { i64 } undef, i64 %3, 0
tail call void @eat({ i64 } %.fca.0.insert)
%4 = load { i64, i64, [0 x i8] }** %1, align 8
%5 = icmp eq { i64, i64, [0 x i8] }* %4, null
br i1 %5, label %_ZN3Foo9glue_drop17hf611996539d3036fE.exit, label %"_ZN8_$UP$str9glue_drop17h15dbdbe2b8897a98E.exit.i.i"
"_ZN8_$UP$str9glue_drop17h15dbdbe2b8897a98E.exit.i.i": ; preds = %"_ZN34std..option..Option$LT$$UP$str$GT$9glue_drop17hc39b3015f3b9c69dE.exit"
%6 = bitcast { i64, i64, [0 x i8] }* %4 to i8*
tail call void @free(i8* %6) #1
br label %_ZN3Foo9glue_drop17hf611996539d3036fE.exit
_ZN3Foo9glue_drop17hf611996539d3036fE.exit: ; preds = %"_ZN34std..option..Option$LT$$UP$str$GT$9glue_drop17hc39b3015f3b9c69dE.exit", %"_ZN8_$UP$str9glue_drop17h15dbdbe2b8897a98E.exit.i.i"
ret void
}
````
But with the `noalias` attribute, it can safely optimize that to:
````llvm
define void @_ZN3foo20hd28431f929f0d6c4xaa4v0.0E(%struct.Foo* noalias nocapture) unnamed_addr #0 {
_ZN3Foo9glue_drop17he9afbc09d4e9c851E.exit:
%1 = getelementptr inbounds %struct.Foo* %0, i64 0, i32 1, i32 0
%2 = load { i64, i64, [0 x i8] }** %1, align 8
store { i64, i64, [0 x i8] }* null, { i64, i64, [0 x i8] }** %1, align 8
%3 = ptrtoint { i64, i64, [0 x i8] }* %2 to i64
%.fca.0.insert = insertvalue { i64 } undef, i64 %3, 0
tail call void @eat({ i64 } %.fca.0.insert)
ret void
}
````
Change `os::args()` and `os::env()` to use `str::from_utf8_lossy()`.
Add new functions `os::args_as_bytes()` and `os::env_as_bytes()` to retrieve the args/env as byte vectors instead.
The existing methods were left returning strings because I expect that the common use-case is to want string handling.
Fixes #7188.
It's too easy to forget the `rust` tag to have a code example tested, and it's
far more common to have testable code than untestable code.
This alters rustdoc to have only two directives, `ignore` and `should_fail`. The
`ignore` directive ignores the code block entirely, and the `should_fail`
directive has been fixed to only fail the test if the code execution fails, not
also compilation.
Parse the environment by default with from_utf8_lossy. Also provide
byte-vector equivalents (e.g. os::env_as_bytes()).
Unfortunately, setenv() can't have a byte-vector equivalent because of
Windows support, unless we want to define a setenv_bytes() that fails
under Windows for non-UTF8 (or non-UTF16).
os::args() was using str::raw::from_c_str(), which would assert if the
C-string wasn't valid UTF-8. Switch to using from_utf8_lossy() instead,
and add a separate function os::args_as_bytes() that returns the ~[u8]
byte-vectors instead.
The std macros used to be injected with a filename of "<std-macros>", but macros
are now injected with a filename of "<{} macros>" where `{}` is filled in with
the crate name. This updates rustdoc to understand this new system so it'll
render source more frequently.
Strip trait impls for types that are stripped either due to the strip-hidden or strip-private passes.
This fixes the search index including trait methods on stripped structs (which breaks searching), and it also removes private types from the implementors list of a trait.
Fixes #9981 and #11439.
In strip-private, also strip impls of traits for private types. This
fixes the search index so searching for "drop", "eq", etc doesn't throw
an exception.
This will hopefully bring us closer to #11937. We're still using gcc's idea of
"startup files", but this should prevent us from leaking in dependencies that we
don't quite want (libgcc for example once compiler-rt is what we use).
Now that fold_item can return multiple items, this is pretty trivial. It
also recursively expands generated items so ItemDecorators can generate
items that are tagged with ItemDecorators!
Closes #4913
When tests fail, their stdout and stderr is printed as part of the summary, but
this helps suppress failure messages from #[should_fail] tests and generally
clean up the output of the test runner.
Includes an upstream commit by pcwalton to improve codegen of our enums getting
moved around.
This also introduces a new commit on top of our stack of patches to fix a mingw32 build issue. I have submitted the patch upstream: http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140210/204653.html
I verified that this builds on the try bots, which amazes me because I think that C++11 is turned on now, but I guess we're still lucky!
Closes #10613 (pcwalton's patch landed)
Closes #11992 (LLVM has removed these options)
The old method of building up a list of items and threading it through
all of the decorators was unwieldy and not really scalable as
non-deriving ItemDecorators become possible. The API is now that the
decorator gets an immutable reference to the item it's attached to, and
a callback that it can pass new items to. If we want to add syntax
extensions that can modify the item they're attached to, we can add that
later, but I think it'll have to be separate from ItemDecorator to avoid
strange ordering issues.
@huonw
Any single-threaded task benchmark will spend a good chunk of time in `kqueue()` on osx and `epoll()` on linux, and the reason for this is that each time a task is terminated it will hit the syscall. When a task terminates, it context switches back to the scheduler thread, and the scheduler thread falls out of `run_sched_once` whenever it figures out that it did some work.
If we know that `epoll()` will return nothing, then we can continue to do work locally (only while there's work to be done). We must fall back to `epoll()` whenever there's active I/O in order to check whether it's ready or not, but without that (which is largely the case in benchmarks), we can prevent the costly syscall and can get a nice speedup.
I've separated the commits into preparation for this change and then the change itself; the last commit message has more details.
These commits pick off some low-hanging fruit which were slowing down spawning green threads. The major speedup comes from fixing a bug in stack caching where we never used any cached stacks!
The program I used to benchmark is at the end. It was compiled with `rustc --opt-level=3 bench.rs --test` and run as `RUST_THREADS=1 ./bench --bench`. I chose to use `RUST_THREADS=1` due to #11730 as the profiles I was getting interfered too much when all the schedulers were in play (and shouldn't be after #11730 is fixed). All of the units below are in ns/iter as reported by `--bench` (lower is better).
| | green | native | raw |
| ------------- | ----- | ------ | ------ |
| osx before | 12699 | 24030 | 19734 |
| linux before | 10223 | 125983 | 122647 |
| osx after | 3847 | 25771 | 20835 |
| linux after | 2631 | 135398 | 122765 |
Note that this is *not* a benchmark of spawning green tasks vs native tasks. I put in the native numbers just to get a ballpark of where green tasks are. This benchmark is *clearly* benefiting from stack caching. Also, OSX is clearly not 5x faster than Linux; I think my VM is just much slower.
All in all, this ended up being a nice 4x speedup for spawning a green task when you're using a cached stack.
```rust
extern mod extra;
extern mod native;

use std::rt::thread::Thread;

#[bench]
fn green(bh: &mut extra::test::BenchHarness) {
    let (p, c) = SharedChan::new();
    bh.iter(|| {
        let c = c.clone();
        spawn(proc() {
            c.send(());
        });
        p.recv();
    });
}

#[bench]
fn native(bh: &mut extra::test::BenchHarness) {
    let (p, c) = SharedChan::new();
    bh.iter(|| {
        let c = c.clone();
        native::task::spawn(proc() {
            c.send(());
        });
        p.recv();
    });
}

#[bench]
fn raw(bh: &mut extra::test::BenchHarness) {
    bh.iter(|| {
        Thread::start(proc() {}).join()
    });
}
```
Two unfortunate allocations were wrapping a proc() in a proc() with
GreenTask::build_start_wrapper, and then boxing this proc in a ~proc() inside of
Context::new(). Both of these allocations were a direct result of two conditions:
1. The Context::new() function has a nice api of taking a procedure argument to
start up a new context with. This inherently required an allocation by
build_start_wrapper because extra code needed to be run around the edges of a
user-provided proc() for a new task.
2. The initial bootstrap code only understood how to pass one argument to the
next function. By modifying the assembly and entry points to understand more
than one argument, more information is passed through in registers instead of
allocating a pointer-sized context.
This is sadly where I end up throwing mips under a bus because I have no idea
what's going on in the mips context switching code and don't know how to modify
it.
Closes #7767
cc #11389
Instead, use an enum to allow running both a procedure and sending the task
result over a channel. I expect the common case to be sending on a channel (e.g.
task::try), so don't require an extra allocation in the common case.
cc #11389
The condition was in the wrong direction and it also didn't take equality into
account. Tests were added for both cases.
For the small benchmark of `task::try(proc() {}).unwrap()`, this takes the
iteration time on OSX from 15119 ns/iter to 6179 ns/iter (timed with
RUST_THREADS=1)
cc #11389
The first step for #9880 is to add a new `crate` keyword. This PR does exactly that. I took the chance to refactor `parse_item_foreign_mod` and I broke it down into 2 separate methods to isolate each feature.
The next step will be to push a new stage0 snapshot and then get rid of all `extern mod` around the code.
If you were writing to something along the lines of `self.foo` then with the new
closure rules it meant that you were borrowing `self` for the entirety of the
closure, meaning that you couldn't format other fields of `self` at the same
time as writing to a buffer contained in `self`.
By lifting the borrow outside of the closure the borrow checker can better
understand that you're only borrowing one of the fields at a time. This had to
use type ascription as well in order to preserve trait object coercions.
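As a schematic illustration of the lifted-borrow pattern (the types and methods here are made up, and the type-ascription detail is omitted):

```rust
struct Logger { buf: String, level: u32 }

impl Logger {
    fn log_level(&mut self) {
        // Borrow only the `buf` field up front, instead of letting the closure
        // capture all of `self`; other fields of `self` stay readable below.
        let buf = &mut self.buf;
        let mut emit = |msg: &str| buf.push_str(msg);
        emit("level=");
        emit(&self.level.to_string());
    }
}

fn main() {
    let mut l = Logger { buf: String::new(), level: 3 };
    l.log_level();
    assert_eq!(l.buf, "level=3");
}
```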
It asserted that the previous count was always nonnegative, but DISCONNECTED is
a valid value for it to see. In order to continue to remember to store
DISCONNECTED after DISCONNECTED was seen, I also added a helper method.
Closes #12226
The `id` shouldn't be changed by external code, and exposing it publicly
allows it to be accidentally changed.
Also, remove the first element special case in the `select!` macro.
`num::bigint`: Remove a source of O(n^2) running time in `fn shr_bits`.
I'll cut to the chase: On my laptop, this brings the running time on
`pidigits 2000` (from src/test/bench/shootout-pidigits.rs) from this:
```
% time ./pidigits 2000 > /dev/null
real 0m7.695s
user 0m7.690s
sys 0m0.005s
```
to this:
```
% time ./pidigits 2000 > /dev/null
real 0m0.322s
user 0m0.318s
sys 0m0.004s
```
The previous code was building up a vector by repeatedly making a
fresh copy for each element that was unshifted onto the front,
yielding quadratic running time. This fixes that by building up the
vector in reverse order (pushing elements onto the end) and then
reversing it.
(Another option would be to build up a zero-initialized vector of the
desired length and then installing all of the shifted result elements
into their target index, but this was easier to hack up quickly, and
yields the desired asymptotic improvement. I have been thinking of
adding a `vec::from_fn_rev` to handle this case, maybe I will try that
this weekend.)
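For what it's worth, the shape of the change is roughly this (an illustrative sketch, not the actual bigint code):

```rust
// Quadratic: every insert at the front shifts all existing elements.
fn build_front_insert(src: &[u32]) -> Vec<u32> {
    let mut out = Vec::new();
    for &x in src {
        out.insert(0, x);
    }
    out
}

// Linear: push onto the end (amortized O(1)) and reverse once at the end.
fn build_push_reverse(src: &[u32]) -> Vec<u32> {
    let mut out = Vec::with_capacity(src.len());
    for &x in src {
        out.push(x);
    }
    out.reverse();
    out
}

fn main() {
    let src = [1, 2, 3, 4];
    assert_eq!(build_front_insert(&src), build_push_reverse(&src));
}
```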
Externally loaded libraries are able to do things that cause references
to them to survive past the expansion phase (e.g. creating @-box cycles,
launching a task or storing something in task local data). As such, the
library has to stay loaded for the lifetime of the process.
This patch gets rid of ObsoleteExternModAttributesInParens and
ObsoleteNamedExternModule since the replacement of `extern mod` with
`extern crate` avoids those cases and raises different errors. Both have
been around for at least a version which makes this a good moment to get
rid of them.
This patch adds a new keyword `crate` which is intended to replace mod
in the context of `extern mod` as part of the issue #9880. The patch
doesn't replace all `extern mod` cases since it is necessary to first
push a new stage0 snapshot.
The implementation could've been less invasive than this. However I
preferred to take this chance to split the `parse_item_foreign_mod`
method and pull the `extern crate` part out of there, hence the new
method `parse_item_foreign_crate`.
This patch replaces all `crate` usage with `krate` before introducing the
new keyword. This ensures that after introducing the keyword, there
won't be any compilation errors.
`krate` might not be the most expressive substitution for `crate`, but it's a
very close abbreviation of it. `module` was already used in several places.
While working on #11363 I stumbled over a couple of ignored tests that seem to be fixed or invalid.
* src/test/run-pass/issue-3559.rs was fixed in #4726
* src/test/compile-fail/borrowck-call-sendfn.rs was fixed in #2978
* update src/test/compile-fail/issue-5500-1.rs to work with current Rust (I'm not 100% sure if the original condition is tested as mentioned in #5500, but I think so)
* removed src/test/compile-fail/issue-5500.rs because it is tested in
src/test/run-fail/issue-5500.rs (they are the same test cases; I just renamed src/test/run-fail/addr-of-bot.rs to be consistent with the other issue name)
* src/test/run-pass/issue-3559.rs was fixed in #4726
* src/test/compile-fail/borrowck-call-sendfn.rs was fixed in #2978
* update src/test/compile-fail/issue-5500-1.rs to work with current Rust
* removed src/test/compile-fail/issue-5500.rs because it is tested in
src/test/run-fail/issue-5500.rs
* src/test/compile-fail/view-items-at-top.rs fixed
* #897 fixed
* compile-fail/issue-6762.rs issue was closed as dup of #6801
* deleted compile-fail/issue-2074.rs because it became irrelevant (#2074); a
test covering this was added in 4f92f452bd
Currently, a scheduler will hit epoll() or kqueue() at the end of *every task*.
The reason is that the scheduler will context switch back to the scheduler task,
terminate the previous task, and then return from run_sched_once. In doing so,
the scheduler will poll for any active I/O.
This shows up painfully in benchmarks that have no I/O at all. For example, this
benchmark:
    for _ in range(0, 1000000) {
        spawn(proc() {});
    }
In this benchmark, the scheduler is currently wasting a good chunk of its time
hitting epoll() when there's always active work to be done (run with
RUST_THREADS=1).
This patch uses the previous two commits to alter the scheduler's behavior to
only return from run_sched_once if no work could be found when trying really
really hard. If there is active I/O, this commit will perform the same as
before, falling back to epoll() to check for I/O completion (to not starve I/O
tasks).
In the benchmark above, I got the following numbers:
12.554s on today's master
3.861s with #12172 applied
2.261s with both this and #12172 applied
cc #8341
This is in preparation for running do_work in a loop while there are no active
I/O handles. This changes the do_work and interpret_message_queue methods to
return a triple where the last element is a boolean flag as to whether work was
done or not.
This commit preserves the same behavior as before, it simply re-structures the
code in preparation for future work.
The green scheduler can optimize its runtime based on this by deciding to not go
to sleep in epoll() if there is no active I/O and there is a task to be stolen.
This is implemented for librustuv by keeping a count of the number of tasks
which are currently homed. If a task is homed, and then performs a blocking I/O
operation, the count will be nonzero while the task is blocked. The homing count
is intentionally 0 when there are I/O handles, but no handles currently blocked.
The reason for this is that epoll() would only be used to wake up the scheduler
anyway.
The crux of this change was to have a `HomingMissile` contain a mutable borrowed
reference back to the `HomeHandle`. The rest of the change was just dealing with
this fallout. This reference is used to decrement the homed handle count in a
HomingMissile's destructor.
Also note that the count maintained is not atomic because all of its
increments/decrements/reads are all on the same I/O thread.
This adopts the rules posted in #10432:
1. If a seek position is negative, then an error is generated
2. Seeks beyond the end-of-file are allowed. Future writes will fill the gap
with data and future reads will return errors.
3. Seeks within the bounds of a file are fine.
Closes #10432
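To make rule 2 concrete, here's a small example using today's `std::fs`/`std::io` API (not the 2014 API this PR touches); the gap created by seeking past the end reads back as zero bytes once something has been written after it:

```rust
use std::fs::OpenOptions;
use std::io::{Read, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    let mut f = OpenOptions::new()
        .read(true).write(true).create(true).truncate(true)
        .open("seek-demo.bin")?;
    f.seek(SeekFrom::Start(1024))?;   // seeking beyond end-of-file is allowed
    f.write_all(b"tail")?;            // this write fills out the file to 1028 bytes
    f.seek(SeekFrom::Start(0))?;
    let mut gap = [0xFFu8; 4];
    f.read_exact(&mut gap)?;          // the gap reads back as zeros
    assert_eq!(gap, [0, 0, 0, 0]);
    Ok(())
}
```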
Cleans up a few issues with `fourcc`:
* Corrects the endianness in the docs example
* Removes `#[cfg(not(test))]` (bors might not build this on Windows. If the build fails, I'll re-add it)
* Adds a FIXME referencing the LLVM assert issue we encountered with bors builds on Windows (Same error as #10872)
Loadable syntax extensions don't work when cross compiling (see #12102), so the
fourcc tests all need to be ignored. They're valuable tests, so they shouldn't
be outright ignored, so they're now flagged with ignore-cross-compile
The user-facing API-level change of this commit is that `SharedChan` is gone and `Chan` now has `clone`. The major parts of this patch are the internals which have changed.
Channels are now internally upgraded from oneshots to streams to shared channels depending on the use case. I've noticed a 3x improvement in the oneshot case and very little slowdown (if any) in the stream/shared case.
This patch is mostly a reorganization of the `std::comm` module, and the large increase in code is from either dispatching to one of 3 impls or the duplication between the stream/shared impl (because they're not entirely separate).
The `comm` module is now divided into `oneshot`, `stream`, `shared`, and `select` modules. Each module contains the implementation for that flavor of channel (or the select implementation for select).
Some notable parts of this patch
* Upgrades are done through a semi-ad-hoc scheme for oneshots and messages for streams
* Upgrades are processed ASAP and have some interesting interactions with select
* send_deferred is gone because I expect the mutex to land before this
* Some of stream/shared is straight-up duplicated, but I like having the distinction between the two modules
* Select got a little worse, but it's still "basically limping along"
* This lumps in the patch of deallocating the queue backlog on packet drop
* I'll rebase this on top of the "more errors from try_recv" patch once it lands (all the infrastructure is here already)
All in all, this shouldn't be merged until the new mutexes are merged (because send_deferred wasn't implemented).
Closes #11351
This is an attempt to remove some more of Rust's dependencies on libgcc and replace it with LLVM's compiler-rt lib. I've added compiler-rt as a submodule and changed libstd to link with it.
As far as I could verify, after this change, the only symbols still imported by std from libgcc are the stack unwinding functions. Other crates, however, still picked up symbols from libgcc, not from libstd, as I had hoped. So linking definitely requires some work.
I've only tested this on windows, 32-bit linux and android and fully expect it to fail on other platforms. Patches are welcome.
This, the Nth rewrite of channels, is not a rewrite of the core logic behind
channels, but rather their API usage. In the past, we had the distinction
between oneshot, stream, and shared channels, but the most recent rewrite
dropped oneshots in favor of streams and shared channels.
This distinction of stream vs shared has shown that it's not quite what we'd
like either, and this moves the `std::comm` module in the direction of "one
channel to rule them all". There now remains only one Chan and one Port.
This new channel is actually a hybrid oneshot/stream/shared channel under the
hood in order to optimize for the use cases in question. Additionally, this also
reduces the cognitive burden of having to choose between a Chan or a SharedChan
in an API.
My simple benchmarks show no reduction in efficiency over the existing channels
today, and a 3x improvement in the oneshot case. I sadly don't have a
pre-last-rewrite compiler to test out the old old oneshots, but I would imagine
that the performance is comparable, but slightly slower (due to atomic reference
counting).
This commit also brings the bonus bugfix to channels that the pending queue of
messages are all dropped when a Port disappears rather than when both the Port
and the Chan disappear.
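This design survives in today's `std::sync::mpsc`, its descendant, where the usage pattern looks roughly like this (modern API names, not the `Chan`/`Port` of this commit):

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (tx, rx) = channel();
    let handles: Vec<_> = (0..4).map(|i| {
        let tx = tx.clone(); // cloning replaces what used to need a separate SharedChan
        thread::spawn(move || tx.send(i).unwrap())
    }).collect();
    drop(tx); // drop the original sender so the receiving iterator can finish
    let mut received: Vec<i32> = rx.iter().collect();
    received.sort();
    assert_eq!(received, vec![0, 1, 2, 3]);
    for h in handles { h.join().unwrap(); }
}
```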
Beforehand, using a concurrent queue always mandated that the "shared state" be
stored internally to the queues in order to provide a safe interface. This isn't
quite as flexible as one would want in some circumstances, so instead this
commit moves the queues to not containing the shared state.
The queues no longer have a "default useful safe" interface, but rather a
"default safe" interface (minus the useful part). The queues have to be shared
manually through an Arc or some other means. This allows them to be a little
more flexible at the cost of a usability hindrance.
I plan on using this new flexibility to upgrade a channel to a shared channel
seamlessly.
I implemented an `add` method for the in-progress btree. It is intended to be refactored later using an alternative to .clone() that passes the borrow checker, but for now, it works as intended. r? @catamorphism
I factored the commits by affected files, for the most part. The last 7 or 8 contain the meat of the PR. The rest are small changes to closures found in the codebase; they may be interesting to read to see some of the impact of the rules.
r? @pcwalton
Fixes #6801
This resolves issue #12157. Does that do it already or is there something else that needs taking care of?
As a side note, there seems to be some documentation in which the now-removed `do` syntax is still explained. The list of keywords is not up-to-date either. But these are certainly separate issues.
Resolves issue #12157. `do` is hereby reinstated as a keyword; no syntax is
associated with it though. Along the way, a unit test had to be adapted, since
it was using `do` as a method identifier.
Breaking changes:
- Any code using `do` as an identifier will no longer work.
This is needed for cases where we only need to know if a list item matches the given predicate (eg. in Servo, we need to know if attributes from different DOM elements are equal).
The current comment actually describes *co*-variance.
Fixing this to describe contravariance while keeping 'static in the definition was tricky, so I just changed it to use 'short and 'long.
I found the typo in my attempt to understand the concept of variance itself, and the comment confused me. I mention this to point out that I'm new to the concept and may still have the definition wrong, so please review with care :)
This is a fairly trivial (but IMHO handy) change to implement IterBytes for IpAddr and SocketAddr.
I originally stumbled across this because I wanted to use a SocketAddr as a HashMap key and discovered that I couldn't do it directly. Had to impl IterBytes on a new intermediate type to work around it.
The previous definition was actually describing covariance.
Fixing it to describe contravariance while keeping 'static in the definition was tricky, so it was changed to use 'short and 'long.
Thinking about swap as an example of unsafe programming. This cleans it up a bit. It also removes type parametrization over `RawPtr` from the memcpy functions to make this compile.
It made unsafe assumptions that any impl of RawPtr is for actual pointers,
and that they can be copied by memcpy. Removing it is easy, so I don't
think it's solving a real problem.
Repair a rather embarrassingly obvious hole that I created as part of #9629. In particular, prevent `&mut` borrows of data in an aliasable location. This used to be prevented through the restrictions mechanism, but in #9629 I modified those rules incorrectly.
r? @pcwalton
Fixes #11913
I was looking into #9303 and was curious if this would still be valuable. @kballard had already done 99% of the work, so I brought the branch up to date and added a feature gate. Any feedback would be appreciated; I wasn't sure if this should be set up as a syntax extension with `#[macro_registrar]`, and if so, where it should be located.
Original PR is here: #9255
TODO:
* [x] Convert to loadable syntax extension
* [x] Default to big endian
* [x] Add `target` identifier
* [x] Expand to include code points 128-255
It was decided that a consistent result across platforms would be the
most useful and least surprising. A "target" option has been added to
get the old behaviour of using the target platform's endianness.
fourcc!() allows you to embed FourCC (or OSType) values that are
evaluated as u32 literals. It takes a 4-byte ASCII string and produces
the u32 that results from interpreting those 4 bytes as a u32, using either
the platform-native endianness, or explicitly as big or little endian.
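For reference, the value a big-endian fourcc corresponds to can be computed by hand like this (a plain-Rust equivalent of the idea, not the macro's actual expansion):

```rust
fn fourcc_be(s: &[u8; 4]) -> u32 {
    ((s[0] as u32) << 24) | ((s[1] as u32) << 16) | ((s[2] as u32) << 8) | (s[3] as u32)
}

fn main() {
    // 'a' = 0x61, 'b' = 0x62, 'c' = 0x63, 'd' = 0x64
    assert_eq!(fourcc_be(b"abcd"), 0x61626364);
}
```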
I don't know if anything depends on MemReader::fill returning an empty slice instead of EndOfFile, but I'm pretty sure that MemReader::read_until should not go into an infinite loop.
These are ancient. I removed a bunch of questions that are less relevant (or completely irrelevant), updated other entries, and removed things that are already better expressed elsewhere.
Before:
```
test test::bench_nonpod_nonarena ... bench: 62 ns/iter (+/- 6)
test test::bench_pod_nonarena ... bench: 0 ns/iter (+/- 0)
```
After:
```
test test::bench_nonpod_nonarena ... bench: 158 ns/iter (+/- 11)
test test::bench_pod_nonarena ... bench: 48 ns/iter (+/- 2)
```
The other tests show no change, but are adjusted to use the generic
return value of `.iter` anyway so that this doesn't change in future
benchmarking.
This allows a result to be marked as "used" by passing it to a function
LLVM cannot see inside. By making `iter` generic and using this
`black_box` on the result, benchmarks can get this behaviour simply by
returning their computation.
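The same trick is available today as `std::hint::black_box` (used here as a stand-in for the `black_box` described above); returning a value through it keeps LLVM from deleting the computation:

```rust
use std::hint::black_box;
use std::time::Instant;

fn fib(n: u64) -> u64 {
    (0..n).fold((0u64, 1u64), |(a, b), _| (b, a + b)).0
}

fn main() {
    let start = Instant::now();
    for _ in 0..1_000_000 {
        // Without black_box, LLVM could constant-fold or hoist this pure call
        // and the loop would measure nothing.
        black_box(fib(black_box(20)));
    }
    println!("elapsed: {:?}", start.elapsed());
}
```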
This pull request tries to fix #12050.
I went after these wrong errors quite aggressively so it might be that I also changed some strings that are not actual errors.
Please point those out and I will update this pull request accordingly.
Error messages cleaned in librustc/middle
Error messages cleaned in libsyntax
Error messages cleaned in libsyntax more aggressively
Error messages cleaned in librustc more aggressively
Fixed affected tests
Fixed other failing tests
Last failing tests fixed
The lexer and json were using `transmute(-1): char` as a sentinel value for EOF, which is invalid since `char` is strictly a unicode codepoint.
Fixing this allows for range asserts on chars since they always lie between 0 and 0x10FFFF.
Declare a `type SendStr = MaybeOwned<'static>` to ease readability of
types that needed the old SendStr behavior.
Implement all the traits for MaybeOwned that SendStr used to implement.
- Convert the formatting traits to `&self` rather than `_: &Self`
- Rejig `syntax::ext::{format,deriving}` a little in preparation
- Implement `#[deriving(Show)]`
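In today's Rust the corresponding feature is spelled `#[derive(Debug)]` and printed with `{:?}`:

```rust
#[derive(Debug)] // the modern descendant of #[deriving(Show)]
struct Point { x: i32, y: i32 }

fn main() {
    println!("{:?}", Point { x: 1, y: 2 }); // prints: Point { x: 1, y: 2 }
}
```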
The transmute was unsound.
There are many instances of .unwrap_or('\x00') for "ignoring" EOF which
either do not make the situation worse than it was (well, actually make
it better, since it's easy to grep for places that don't handle EOF) or
can never ever be read.
Fixes #8971.
This also drops support for the managed pointer POISON_ON_FREE feature
as it's not worth adding back the support for it. After a snapshot, the
leftovers can be removed.
This pull request:
1) Changes the initial insertion sort to be in-place, and defers allocation of working set until merge is needed.
2) Increases the maximum run length for which insertion sort is used from 8 to 32 elements. This increases the size of vectors that will not allocate, and reduces the number of merge passes by two. It seemed to be the sweet spot in the benchmarks that I ran.
Here are the results of some benchmarks. Note that they are sorting u64s, so types that are more expensive to compare or copy may have different behaviors.
Before changes:
```
test vec::bench::sort_random_large bench: 719753 ns/iter (+/- 130173) = 111 MB/s
test vec::bench::sort_random_medium bench: 4726 ns/iter (+/- 742) = 169 MB/s
test vec::bench::sort_random_small bench: 344 ns/iter (+/- 76) = 116 MB/s
test vec::bench::sort_sorted bench: 437244 ns/iter (+/- 70043) = 182 MB/s
```
Deferred allocation (8 element insertion sort):
```
test vec::bench::sort_random_large bench: 702630 ns/iter (+/- 88158) = 113 MB/s
test vec::bench::sort_random_medium bench: 4529 ns/iter (+/- 497) = 176 MB/s
test vec::bench::sort_random_small bench: 185 ns/iter (+/- 49) = 216 MB/s
test vec::bench::sort_sorted bench: 425853 ns/iter (+/- 60907) = 187 MB/s
```
Deferred allocation (16 element insertion sort):
```
test vec::bench::sort_random_large bench: 692783 ns/iter (+/- 165837) = 115 MB/s
test vec::bench::sort_random_medium bench: 4434 ns/iter (+/- 722) = 180 MB/s
test vec::bench::sort_random_small bench: 187 ns/iter (+/- 38) = 213 MB/s
test vec::bench::sort_sorted bench: 393783 ns/iter (+/- 85548) = 203 MB/s
```
Deferred allocation (32 element insertion sort):
```
test vec::bench::sort_random_large bench: 682556 ns/iter (+/- 131008) = 117 MB/s
test vec::bench::sort_random_medium bench: 4370 ns/iter (+/- 1369) = 183 MB/s
test vec::bench::sort_random_small bench: 179 ns/iter (+/- 32) = 223 MB/s
test vec::bench::sort_sorted bench: 358353 ns/iter (+/- 65423) = 223 MB/s
```
Deferred allocation (64 element insertion sort):
```
test vec::bench::sort_random_large bench: 712040 ns/iter (+/- 132454) = 112 MB/s
test vec::bench::sort_random_medium bench: 4425 ns/iter (+/- 784) = 180 MB/s
test vec::bench::sort_random_small bench: 179 ns/iter (+/- 81) = 223 MB/s
test vec::bench::sort_sorted bench: 317812 ns/iter (+/- 62675) = 251 MB/s
```
This is the best I could manage with the basic merge sort while keeping the invariant that the original vector must contain each element exactly once when the comparison function is called. If one is not married to a stable sort, an in-place n*log(n) sorting algorithm may have better performance in some cases.
for #12011
cc @huonw
Added a separate in-place insertion sort for short vectors.
Increased the threshold for insertion sort from 8 to 32 elements
for small types and 16 for larger types. Added benchmarks
for sorting larger types.
This PR extends the tidy formatting check to rust files in the test folder. To facilitate this, a few flags were added to tidy:
* `xfail-tidy-cr` - Disables the check for CR characters for all following lines in the file
* `xfail-tidy-tab` - Disables the check for tab characters for all following lines in the file
* `xfail-tidy-linelength` - Disables the line length check for all following lines in the file
Checks should not have to be disabled often. I disabled line length checks in `debug-info` tests that use `debugger:` checks, but aside from that, there were relatively few exclusions. Running tidy on all the tests does slow down the formatting check, so it may be worth investigating further optimization.
cc #4534
`from_utf8_lossy()` takes a byte vector and produces a `~str`, converting
any invalid UTF-8 sequence into the U+FFFD REPLACEMENT CHARACTER.
The replacement follows the guidelines in §5.22 Best Practice for U+FFFD
Substitution from the Unicode Standard (Version 6.2)[1], which also
matches the WHATWG rules for utf-8 decoding[2].
[1]: http://www.unicode.org/versions/Unicode6.2.0/ch05.pdf
[2]: http://encoding.spec.whatwg.org/#utf-8
Closes #9516.
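For example (shown with today's `String::from_utf8_lossy`, which kept this behavior), a truncated multi-byte sequence becomes a single U+FFFD:

```rust
fn main() {
    // "Hello " followed by a truncated 4-byte sequence (F0 90 80), then "World".
    let bytes = b"Hello \xF0\x90\x80World";
    assert_eq!(String::from_utf8_lossy(bytes), "Hello \u{FFFD}World");
}
```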
Part of #8784
Changes:
- Everything labeled under collections in libextra has been moved into a new crate 'libcollections'.
- Renamed container.rs to deque.rs, since it was no longer 'container traits for extra', just a deque trait.
- Crates that depend on the collections have been updated and dependencies sorted.
- I think I changed all the imports in the tests to make sure it works. I'm not entirely sure, as near the end of the tests there was yet another `use` that I forgot to change, and when I went to try again, it started rebuilding everything, which I don't currently have time for.
There will probably be incompatibility between this and the other pull requests that are splitting up libextra. I'm happy to rebase once those have been merged.
The tests I didn't get to run should pass. But I can redo them another time if they don't.
This has been a long time coming. Conditions in rust were initially envisioned
as being a good alternative to error code return pattern. The idea is that all
errors are fatal-by-default, and you can opt-in to handling the error by
registering an error handler.
While sounding nice, conditions ended up having some unforeseen shortcomings:
* Actually handling an error has some very awkward syntax:
```rust
let mut result = None;
let mut answer = None;
io::io_error::cond.trap(|e| { result = Some(e) }).inside(|| {
    answer = Some(some_io_operation());
});
match result {
    Some(err) => { /* hit an I/O error */ }
    None => {
        let answer = answer.unwrap();
        /* deal with the result of I/O */
    }
}
```
This pattern can certainly use functions like io::result, but at its core
actually handling conditions is fairly difficult
* The "zero value" of a function is often confusing. One of the main ideas
behind using conditions was to change the signature of I/O functions. Instead
of read_be_u32() returning a result, it returned a u32. Errors were notified
via a condition, and if you caught the condition you understood that the "zero
value" returned is actually a garbage value. These zero values are often
difficult to understand, however.
One case of this is the read_bytes() function. The function takes an integer
length of the amount of bytes to read, and returns an array of that size. The
array may actually be shorter, however, if an error occurred.
Another case is fs::stat(). The theoretical "zero value" is a blank stat
struct, but it's a little awkward to create and return a zero'd out stat
struct on a call to stat().
In general, the return value of functions that can raise error are much more
natural when using a Result as opposed to an always-usable zero-value.
* Conditions impose a necessary runtime requirement on *all* I/O. In theory I/O
is as simple as calling read() and write(), but using conditions imposed the
restriction that a rust local task was required if you wanted to catch errors
with I/O. While certainly a surmountable difficulty, this was always a bit of
a thorn in the side of conditions.
* Functions raising conditions are not always clear that they are raising
conditions. This suffers a similar problem to exceptions where you don't
actually know whether a function raises a condition or not. The documentation
likely explains, but if someone retroactively adds a condition to a function
there's nothing forcing upstream users to acknowledge a new point of task
failure.
* Libraries using I/O are not guaranteed to correctly raise on conditions when an
error occurs. In developing various I/O libraries, it's much easier to just
return `None` from a read rather than raising an error. The silent contract of
"don't raise on EOF" was a little difficult to understand and threw a wrench
into the answer of the question "when do I raise a condition?"
Many of these difficulties can be overcome through documentation, examples, and
general practice. In the end, all of these difficulties added together ended up
being too overwhelming and improving various aspects didn't end up helping that
much.
A result-based I/O error handling strategy also has shortcomings, but the
cognitive burden is much smaller. The tooling necessary to make this strategy as
usable as conditions were is much smaller than the tooling necessary for
conditions.
Perhaps conditions may manifest themselves as a future entity, but for now
we're going to remove them from the standard library.
Closes #9795. Closes #8968.
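For contrast with the `cond.trap(..)` example above, here is roughly the Result-based shape this change moves toward, written against today's `std::io` (the 2014 types differ in detail):

```rust
use std::io::{self, Read};

// Instead of returning a garbage "zero value" and raising a condition,
// the error is part of the return type and must be handled or propagated.
fn read_be_len(r: &mut impl Read) -> io::Result<u32> {
    let mut buf = [0u8; 4];
    r.read_exact(&mut buf)?;
    Ok(u32::from_be_bytes(buf))
}

fn main() {
    let data = [0u8, 0, 1, 0, b'x'];
    let mut cursor = &data[..];
    match read_be_len(&mut cursor) {
        Ok(len) => assert_eq!(len, 256),
        Err(e) => panic!("I/O error: {}", e),
    }
}
```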
This commit removes the -c, --emit-llvm, -s, --rlib, --dylib, --staticlib,
--lib, and --bin flags from rustc, adding the following flags:
* --emit=[asm,ir,bc,obj,link]
* --crate-type=[dylib,rlib,staticlib,bin,lib]
The -o option has also been redefined to be used for *all* flavors of outputs.
This means that we no longer ignore it for libraries. The --out-dir remains the
same as before.
The new logic for files that rustc emits is as follows:
1. Output types are dictated by the --emit flag. The default value is
--emit=link, and this option can be passed multiple times and have all options
stacked on one another.
2. Crate types are dictated by the --crate-type flag and the #[crate_type]
attribute. The flags can be passed many times and stack with the crate
attribute.
3. If the -o flag is specified, and only one output type is specified, the
output will be emitted at this location. If more than one output type is
specified, then the filename of -o is ignored, and all output goes in the
directory that -o specifies. The -o option always ignores the --out-dir
option.
4. If the --out-dir flag is specified, all output goes in this directory.
5. If -o and --out-dir are both not present, all output goes in the directory of
the crate file.
6. When multiple output types are specified, the filestem of all output is the
same as the name of the CrateId (derived from a crate attribute or from the
filestem of the crate file).
Closes #7791. Closes #11056. Closes #11667.
This commit removes the -c, --emit-llvm, -s, --rlib, --dylib, --staticlib,
--lib, and --bin flags from rustc, adding the following flags:
* --emit=[asm,ir,bc,obj,link]
* --crate-type=[dylib,rlib,staticlib,bin,lib]
The -o option has also been redefined to be used for *all* flavors of outputs.
This means that we no longer ignore it for libraries. The --out-dir remains the
same as before.
The new logic for files that rustc emits is as follows:
1. Output types are dictated by the --emit flag. The default value is
--emit=link, and this option can be passed multiple times and have all
options stacked on one another.
2. Crate types are dictated by the --crate-type flag and the #[crate_type]
attribute. The flags can be passed many times and stack with the crate
attribute.
3. If the -o flag is specified, and only one output type is specified, the
output will be emitted at this location. If more than one output type is
specified, then the filename of -o is ignored, and all output goes in the
directory that -o specifies. The -o option always ignores the --out-dir
option.
4. If the --out-dir flag is specified, all output goes in this directory.
5. If -o and --out-dir are both not present, all output goes in the current
directory of the process.
6. When multiple output types are specified, the filestem of all output is the
same as the name of the CrateId (derived from a crate attribute or from the
filestem of the crate file).
Closes #7791. Closes #11056. Closes #11667.
A weak pointer inside itself will have its destructor run when the last
strong pointer to that data disappears, so we need to make sure that the
Weak and Rc destructors don't duplicate work (i.e. freeing).
By making the Rcs effectively take a weak pointer, we ensure that no
Weak destructor will free the pointer while still ensuring that Weak
pointers can't be upgraded to strong ones as the destructors run.
This approach of starting weak at 1 is what libstdc++ does.
Fixes #12046.
I have a hunch this just deadlocked the windows bots. Due to UDP being a lossy
protocol, I don't think we can guarantee that the server can receive both
packets, so just listen for one of them.
This is part of the overall strategy I would like to take when approaching
issue #11165. The only two I/O objects that reasonably want to be "split" are
the network stream objects. Everything else can be "split" by just creating
another version.
The initial idea I had was to literally split the object into a reader and a
writer half, but that would just introduce lots of clutter with extra interfaces
that were a little unnecessary, or it would return a ~Reader and a ~Writer which
means you couldn't access things like the remote peer name or local socket name.
The solution I found to be nicer was to just clone the stream itself. The clone
is just a clone of the handle, nothing fancy going on at the kernel level.
Conceptually I found this very easy to wrap my head around (everything else
supports clone()), and it solved the "split" problem at the same time.
The cloning support is pretty specific per platform/lib combination:
* native/win32 - uses some specific WSA apis to clone the SOCKET handle
* native/unix - uses dup() to get another file descriptor
* green/all - This is where things get interesting. When we support full clones
of a handle, this implies that we're allowing simultaneous writes
and reads to happen. It turns out that libuv doesn't support two
simultaneous reads or writes of the same object. It does support
*one* read and *one* write at the same time, however. Some extra
infrastructure was added to just block concurrent writers/readers
until the previous read/write operation was completed.
I've added tests to the tcp/unix modules to make sure that this functionality is
supported everywhere.
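Today's `std::net::TcpStream` kept this clone-the-handle approach as `try_clone`, so the resulting usage pattern looks roughly like this (a sketch, not this PR's exact API, which exposed `clone()` directly):

```rust
use std::io::{Read, Write};
use std::net::TcpStream;
use std::thread;

fn read_and_write(stream: TcpStream) -> std::io::Result<()> {
    let mut reader = stream.try_clone()?; // second handle to the same socket
    let mut writer = stream;
    let t = thread::spawn(move || {
        let mut buf = [0u8; 1024];
        let _ = reader.read(&mut buf); // one task reads...
    });
    writer.write_all(b"ping")?;        // ...while another writes
    t.join().unwrap();
    Ok(())
}

fn main() {
    // Connecting is omitted here; read_and_write would be called with a
    // connected TcpStream, e.g. from TcpStream::connect(..).
}
```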
- `extra::json` didn't make the cut, because of `extra::json`'s required
dep on `extra::TreeMap`. If/when `extra::TreeMap` moves out of `extra`,
then `extra::json` could move into `serialize`
- `libextra`, `libsyntax` and `librustc` depend on the newly created
`libserialize`
- The extensions to various `extra` types like `DList`, `RingBuf`, `TreeMap`
and `TreeSet` for `Encodable`/`Decodable` were moved into the respective
modules in `extra`
- There is some trickery, evident in `src/libextra/lib.rs` where a stub
of `extra::serialize` is set up (in `src/libextra/serialize.rs`) for
use in the stage0 build, where the snapshot rustc is still making
deriving for `Encodable` and `Decodable` point at extra. Big props to
@huonw for help working out the re-export solution for this
extra: inline extra::serialize stub
fix stuff clobbered in rebase + don't reexport serialize::serialize
no more globs in libserialize
syntax: fix import of libserialize traits
librustc: fix bad imports in encoder/decoder
add serialize dep to librustdoc
fix failing run-pass tests w/ serialize dep
adjust uuid dep
more rebase de-clobbering for libserialize
fixing tests, pushing libextra dep into cfg(test)
fix doc code in extra::json
adjust index.md links to serialize and uuid library
This time everything should be okay; no breakage due to a failed merge or rebase...
Sorry for the abuse of pull requests.
This moves extra::sync, extra::arc, extra::future, extra::comm and extra::task_pool to libsync.