Some code cleanup, sorting of import blocks
Removed std::unstable::UnsafeArc's use of Either
Added run-fail tests for the new FailWithCause impls
Changed future_result and try to return Result<(), ~Any>.
- Internally, there is an enum of possible fail messages passed around.
- In case of linked failure or a string message, the ~Any gets
lazily allocated in future_result's recv method.
- For that, future result now returns a wrapper around a Port.
- Moved and renamed task::TaskResult into rt::task::UnwindResult
and made it an internal enum.
- Introduced a replacement typedef `type TaskResult = Result<(), ~Any>`.
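For a sense of how the new result type reads at a call site, here is a minimal sketch against today's `std::thread`, whose `join()` ended up with essentially the same `Result<(), ~Any>` shape (modern syntax used as an analog, not the 2013 API):

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| panic!("task failed"));

    // join() yields Result<(), Box<dyn Any + Send>>, mirroring the
    // TaskResult typedef above: the failure payload is only turned into
    // a boxed Any when the result is actually received.
    match handle.join() {
        Ok(()) => println!("task succeeded"),
        Err(payload) => match payload.downcast_ref::<&str>() {
            Some(msg) => println!("task failed: {}", msg),
            None => println!("task failed with a non-string payload"),
        },
    }
}
```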
I'm not entirely sure why this is happening, but the server task is never seeing
the second send of the client task, and this test will very reliably fail to
complete on windows.
It was pretty much a miracle that these tests were ever passing. They would
never have passed in the single threaded case because only one sigint in the
tests is ever generated, but when run in parallel two sigints will be generated.
This drops more of the old C++ runtime to rather be written in rust. A few
features were lost along the way, but hopefully not too many. The main loss is
that there are no longer backtraces associated with allocations (rust doesn't
have a way of acquiring those just yet). Other than that though, I believe that
the rest of the debugging utilities made their way over into rust.
Closes #8704
This optimizes the `home_for_io` code path by requiring fewer scheduler
operations in some situations.
When moving to your home scheduler, this no longer forces a context switch if
you're already on the home scheduler. Instead, the homing code now simply pins
you to your current scheduler (making it so you can't be stolen away). If you're
not on your home scheduler, then we context switch away, sending you to your
home scheduler.
When the I/O operation is done, then we also no longer forcibly trigger a
context switch. Instead, the action is cased on whether the task is homed or
not. If a task does not have a home, then the task is re-flagged as not having a
home and no context switch is performed. If a task is homed to the current
scheduler, then we don't do anything, and if the task is homed to a foreign
scheduler, then it's sent along its merry way.
I verified that there are about a third as many `write` syscalls done in print
operations now. Libuv uses write to implement async handles, and the homing
before and after each I/O operation was triggering a write on these async
handles. Additionally, using the terrible benchmark of printing 10k times in a
loop, this drives the runtime from 0.6s down to 0.3s (yay!).
Almost all languages provide some form of buffering of the stdout stream, and
this commit adds this feature for rust. A handle to stdout is lazily initialized
in the Task structure as a buffered owned Writer trait object. The buffer
behavior depends on where stdout is directed to. Like C, this line-buffers the
stream when the output goes to a terminal (flushes on newlines), and also like C
this uses a fixed-size buffer when output is not directed at a terminal.
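As a rough sketch of that policy using today's `std::io` types as stand-ins (the 64 KB capacity here is illustrative, not the size the commit chose):

```rust
use std::io::{self, BufWriter, IsTerminal, LineWriter, Write};

// Line-buffer when stdout is a terminal, otherwise use a fixed-size buffer.
fn buffered_stdout() -> Box<dyn Write> {
    let raw = io::stdout();
    if raw.is_terminal() {
        Box::new(LineWriter::new(raw)) // flush on newlines
    } else {
        Box::new(BufWriter::with_capacity(64 * 1024, raw)) // flush when full
    }
}

fn main() -> io::Result<()> {
    let mut out = buffered_stdout();
    for _ in 0..10_000 {
        writeln!(out, "what")?;
    }
    out.flush()
}
```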
We may decide the fixed-size buffering is overkill, but it certainly does reduce
write syscall counts when piping output elsewhere. This is a *huge* benefit to
any code using logging macros or the printing macros. Formatting emits calls to
`write` very frequently, and to have each of them backed by a write syscall was
very expensive.
In a local benchmark of printing 10000 lines of "what" to stdout, I got the
following timings:
when   | terminal | redirected
-------|----------|-----------
before | 0.575s   | 0.525s
after  | 0.197s   | 0.013s
C      | 0.019s   | 0.004s
I can also confirm that we're buffering the output appropriately in both
situations. We're still far slower than C, but I believe much of that has to do
with the "homing" that all tasks do; we're still performing an order of
magnitude more write syscalls than C does.
It's not guaranteed that there will always be an event loop to run, and this
implementation will serve as an incredibly basic one which does not provide any
I/O, but allows the scheduler to still run.
cc #9128
This is a peculiar function to require event loops to implement, and it's only
used in one spot during tests right now. Instead, possibly more robust APIs
for timers should be used rather than requiring all event loops to implement a
curious-looking function.
The PausibleIdleCallback must have some handle into the event loop, and because
struct fields' destructors run top-to-bottom in declaration order, this meant
that the event loop was getting destroyed before the idle callback was.
I can't confirm that this fixes a problem in how we use libuv, but it does
semantically fix a problem for usage with other event loops.
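A small self-contained illustration of the drop-order rule in question (the field names here are hypothetical):

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

struct Scheduler {
    event_loop: Noisy,    // declared first, so dropped first
    idle_callback: Noisy, // dropped second, after the loop it points into
}

fn main() {
    let _s = Scheduler {
        event_loop: Noisy("event loop"),
        idle_callback: Noisy("idle callback"),
    };
    // Prints "dropping event loop" then "dropping idle callback", which is
    // exactly the hazard described above; declaring the callback field above
    // the event loop field avoids it.
}
```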
This adds constructors to pipe streams in the new runtime to take ownership of
file descriptors, and also fixes a few tests relating to the std::run changes
(new errors are raised on io_error and one test is xfail'd).
I was seeing a lot of weird behavior with stdin behaving as a tty, and it
doesn't really quite make sense, so instead this moves to using libuv's pipes
instead (which make more sense for stdin specifically).
This prevents piping input to rustc hanging forever.
The general idea is to remove conditions completely from I/O, so in the meantime
the read_error condition is removed and folded into the io_error condition.
This isn't an ideal patch, and the comment explaining why is in the code.
Basically uvio uses task::unkillable which touches the kill flag for a task, and
if the task is failing due to mismanagement of the kill flag, then there will be serious
problems when the task tries to print that it's failing.
When uv's TTY I/O is used for the stdio streams, the file descriptors are put
into a non-blocking mode. This means that other concurrent writes to the same
stream can fail with EAGAIN or EWOULDBLOCK. By moving all I/O to event-loop
I/O, we avoid this error.
There is one location which cannot move, which is the runtime's dumb_println
function. This was implemented to handle the EAGAIN and EWOULDBLOCK errors and
simply retry again and again.
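A sketch of that retry strategy in today's `std::io` terms (the function name is illustrative; EAGAIN/EWOULDBLOCK both surface as `WouldBlock`):

```rust
use std::io::{self, ErrorKind, Write};

// Keep writing until the whole buffer has gone through, retrying when the
// (non-blocking) descriptor reports that it would block.
fn write_retrying<W: Write>(out: &mut W, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        match out.write(buf) {
            Ok(0) => return Err(ErrorKind::WriteZero.into()),
            Ok(n) => buf = &buf[n..],
            Err(ref e) if e.kind() == ErrorKind::WouldBlock => continue,
            Err(ref e) if e.kind() == ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```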
This involved changing a fair amount of code, rooted in how we access the local
IoFactory instance. I added a helper method to the rtio module to access the
optional local IoFactory. This is different than before in which it was assumed
that a local IoFactory was *always* present. Now, a separate io_error is raised
when an IoFactory is not present, yet I/O is requested.
This removes the PathLike trait associated with this "support module". This is
yet another "container of bytes" trait, so I didn't want to duplicate what
already exists throughout libstd. In actuality, we're going to pass off C strings
to the libuv APIs, so the arguments are now bound with the 'ToCStr' trait
instead.
Additionally, a layer of complexity was removed by immediately converting these
type-generic parameters into CStrings to get handed off to libuv APIs.
We get a little more functionality from libuv for these kinds of streams (things
like terminal dimensions), and it also appears to more gracefully handle the
stream being a window. Beforehand, if you used stdio and hit CTRL+d on a
process, libuv would continually return 0-length successful reads instead of
interpreting that the stream was closed.
I was hoping to be able to write tests for this, but currently the testing
infrastructure doesn't allow tests with a stdin and a stdout, but this has been
manually tested! (not that it means much)
- Adds the `Sample` and `IndependentSample` traits for generating numbers where there are parameters (e.g. a list of elements to draw from, or the mean/variance of a normal distribution). The former takes `&mut self` and the latter takes `&self` (this is the only difference); a rough sketch of the trait shapes follows this list.
- Adds proper `Normal` and `Exp`-onential distributions
- Adds `Range` which generates `[lo, hi)` generically & properly (via a new trait) replacing the incorrect behaviour of `Rng.gen_integer_range` (this has become `Rng.gen_range` for convenience, it's far more efficient to use `Range` itself)
- Move the `Weighted` struct from `std::rand` to `std::rand::distributions` & improve it
- optimisations and docs
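A rough sketch of the `Sample`/`IndependentSample` shapes described in the first bullet (the `Rng` bound and method names here are reconstructed, not copied from the source):

```rust
trait Rng {
    fn next_u64(&mut self) -> u64;
}

trait Sample<T> {
    // May update internal state between calls (e.g. cached values),
    // hence &mut self.
    fn sample<R: Rng>(&mut self, rng: &mut R) -> T;
}

trait IndependentSample<T>: Sample<T> {
    // Stateless between calls, hence &self: the only difference.
    fn ind_sample<R: Rng>(&self, rng: &mut R) -> T;
}
```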
I'm planning on doing more updates, but the section in the tutorial stood out to me since the 'rust' tool no longer exists; this should probably be removed to lessen confusion.
This reifies the computations required for uniformity done by
(the old) `Rng.gen_integer_range` (now Rng.gen_range), so that they can
be amortised over many invocations, if it is called in a loop.
Also, it makes it correct by using a trait + impls for each type, rather
than trying to coerce `Int` + `u64` to do the right thing. This
also makes it more extensible, e.g. big integers could & should
implement SampleRange.
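To illustrate what gets reified and amortized, here is a minimal sketch of a u32 range built on rejection sampling (the names and exact arithmetic are assumptions for illustration; the real SampleRange is generic over the integer types):

```rust
// Precompute the rejection zone once; each sample is then one modulo plus
// an occasional retry, instead of redoing the setup on every call.
struct UniformU32 {
    low: u32,
    range: u32,       // high - low
    accept_zone: u32, // largest multiple of `range` representable in u32
}

impl UniformU32 {
    fn new(low: u32, high: u32) -> UniformU32 {
        assert!(low < high);
        let range = high - low;
        let zone = u32::MAX - (u32::MAX % range);
        UniformU32 { low, range, accept_zone: zone }
    }

    fn sample(&self, mut next_u32: impl FnMut() -> u32) -> u32 {
        loop {
            let v = next_u32();
            // Reject raw values past the zone so the result stays uniform.
            if v < self.accept_zone {
                return self.low + (v % self.range);
            }
        }
    }
}

fn main() {
    // A counter-based stand-in for an RNG is enough to exercise the API.
    let mut x = 0u32;
    let dist = UniformU32::new(10, 20);
    let v = dist.sample(|| {
        x = x.wrapping_mul(1_664_525).wrapping_add(1_013_904_223);
        x
    });
    assert!((10..20).contains(&v));
}
```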
This commit re-introduces the functionality of __morestack in a way that it was
not originally anticipated. Rust does not currently have segmented stacks,
rather just large stack segments. We do not detect when these stack segments are
overrun currently, but this commit leverages __morestack in order to check this.
This commit purges a lot of the old __morestack and stack limit C++
functionality, migrating the necessary chunks to rust. The stack limit is now
entirely maintained in rust, and the "main logic bits" of __morestack are now
also implemented in rust as well.
I put my best effort into validating that this currently builds and runs successfully on osx and linux 32/64 bit, but I was unable to get this working on windows. We never did have unwinding through __morestack frames, and although I tried poking at it for a bit, I was unable to understand why we don't get unwinding right now.
A focus of this commit is to implement as much of the logic in rust as possible. This involved some liberal usage of `no_split_stack` in various locations, along with some use of the `asm!` macro (scary). I modified a bit of C++ to stop calling `record_sp_limit` because this is no longer defined in C++, rather in rust.
Another consequence of this commit is that `thread_local_storage::{get, set}` must both be flagged with `#[rust_stack]`. I've briefly looked at the implementations on osx/linux/windows to ensure that they're pretty small stacks, and I'm pretty sure that they're definitely less than 20K stacks, so we probably don't have a lot to worry about.
Other things worthy of note:
* The default stack size is now 4MB instead of 2MB. This is so that when we request 2MB to call a C function you don't immediately overflow just because you have already consumed some of the stack.
* `asm!` is actually pretty cool, maybe we could actually define context switching with it?
* I wanted to add links to the internet about all this jazz of storing information in TLS, but I was only able to find a link for the windows implementation. Otherwise my suggestion is just "disassemble on that arch and see what happens"
* I put my best effort forward on arm/mips to tweak __morestack correctly, we have no ability to test this so an extra set of eyes would be useful on these spots.
* This is all really tricky stuff, so I tried to put as many comments as I thought were necessary, but if anything is still unclear (or I completely forgot to take something into account), I'm willing to write more!
This commit resumes management of the stack boundaries and limits when switching
between tasks. This additionally leverages the __morestack function to run code
on "stack overflow". The current behavior is to abort the process, but this is
probably not the best behavior in the long term (for details, see the comment I
wrote up in the stack exhaustion routine).
Rewrite the entire `std::path` module from scratch.
`PosixPath` is now based on `~[u8]`, which fixes #7225.
Unnecessary allocation has been eliminated.
There are a lot of clients of `Path` that still assume utf-8 paths.
This is covered in #9639.
...al work
This is causing really awful scheduler behavior where the main thread scheduler is
continually waking up, stealing work, discovering it can't actually run the work,
and sending it off to another scheduler.
No test cases because we don't have suitable instrumentation for it.
Add a new trait BytesContainer that is implemented for both byte vectors
and strings.
Convert Path::from_vec and ::from_str to one function, Path::new().
Remove all the _str-suffixed mutation methods (push, join, with_*,
set_*) and modify the non-suffixed versions to use BytesContainer.
Remove the old path.
Rename path2 to path.
Update all clients for the new path.
Also make some miscellaneous changes to the Path APIs to help the
adoption process.
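A rough sketch of the constructor shape this gives, written in today's syntax (the method name and impl details are assumptions, not the original signatures):

```rust
trait BytesContainer {
    fn container_as_bytes(&self) -> &[u8];
}

impl BytesContainer for str {
    fn container_as_bytes(&self) -> &[u8] { self.as_bytes() }
}

impl BytesContainer for [u8] {
    fn container_as_bytes(&self) -> &[u8] { self }
}

struct Path {
    repr: Vec<u8>, // paths are raw bytes, not necessarily valid UTF-8
}

impl Path {
    // One constructor instead of from_str/from_vec: anything that can view
    // itself as bytes will do.
    fn new<T: BytesContainer + ?Sized>(path: &T) -> Path {
        Path { repr: path.container_as_bytes().to_vec() }
    }
}

fn main() {
    let from_str = Path::new("some/utf8/path");
    let from_bytes = Path::new(&b"raw\xffbytes"[..]);
    assert_eq!(from_str.repr.len(), 14);
    assert_eq!(from_bytes.repr.len(), 9);
}
```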
This patch removes the code responsible for handling older CrateMap versions (as discussed during #9593). Only the new (safer) layout is supported now.
This implements a number of the baby steps needed to start eliminating everything inside of `std::io`. It turns out that there are a *lot* of users of that module, so I'm going to try to tackle them separately instead of bringing down the whole system all at once.
This pull implements a large amount of unimplemented functionality inside of `std::rt::io` including:
* Native file I/O (file descriptors, *FILE)
* Native stdio (through the native file descriptors)
* Native processes (extracted from `std::run`)
I also found that there are a number of users of `std::io` which desire to read an input line-by-line, so I added an implementation of `read_until` and `read_line` to `BufferedReader`.
With all of these changes in place, I started to axe various usages of `std::io`. There's a lot of one-off uses here-and-there, but the major use-case remaining that doesn't have a fantastic solution is `extra::json`. I ran into a few compiler bugs when attempting to remove that, so I figured I'd come back to it later instead.
There is one fairly major change in this pull, and it's moving from native stdio to uv stdio via `print` and `println`. Unfortunately logging still goes through native I/O (via `dumb_println`). This is going to need some thinking, because I still want the goal of logging/printing to be 0 allocations, and this is not possible if `io::stdio::stderr()` is called on each log message. Instead I think that this may need to be cached as the `logger` field inside the `Task` struct, but that will require a little more workings to get right (this is also a similar problem for print/println, do we cache `stdout()` to not have to re-create it every time?).
This changes an `assert_once_ever!` assertion to just a plain old assertion
around an atomic boolean to ensure that one particular runtime doesn't attempt
to exit twice.
Closes #9739
This lets the C++ code in the rt handle the (slightly) tricky parts of
random number generation: e.g. error detection/handling, and using the
values of the `#define`d options to the various functions.
This provides 2 methods: .reseed() and ::from_seed that modify and
create respectively.
Implement this trait for the RNGs in the stdlib for which this makes
sense.
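A sketch of the trait shape this describes, with a toy generator to show both methods (the trait and seed types here are illustrative assumptions):

```rust
trait SeedableRng<Seed> {
    /// Reseed this generator in place.
    fn reseed(&mut self, seed: Seed);
    /// Construct a fresh generator from a seed.
    fn from_seed(seed: Seed) -> Self;
}

// A toy xorshift64 generator, just to exercise the two methods.
struct XorShift64 { state: u64 }

impl SeedableRng<u64> for XorShift64 {
    fn reseed(&mut self, seed: u64) { self.state = seed | 1; }
    fn from_seed(seed: u64) -> XorShift64 { XorShift64 { state: seed | 1 } }
}

impl XorShift64 {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}

fn main() {
    let mut rng = XorShift64::from_seed(42);
    let first = rng.next_u64();
    rng.reseed(42);
    assert_eq!(first, rng.next_u64()); // same seed, same stream
}
```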
The former reads from e.g. /dev/urandom, the latter just wraps any
std::rt::io::Reader into an interface that implements Rng.
This also adds Rng.fill_bytes for efficient implementations of the above
(reading 8 bytes at a time is inefficient when you can read 1000), and
removes the dependence on src/rt (i.e. rand_gen_seed) although this last
one requires implementing hand-seeding of the XorShiftRng used in the
scheduler on Linux/unixes, since OSRng relies on a scheduler existing to
be able to read from /dev/urandom.
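A minimal sketch of the reader-wrapping idea in today's terms (the struct and method names are illustrative; error handling in a real implementation would be less blunt):

```rust
use std::io::Read;

// Wrap any reader as an entropy source: fill_bytes pulls straight from the
// reader, and the word-sized methods are built on top of it.
struct ReaderRng<R: Read> {
    reader: R,
}

impl<R: Read> ReaderRng<R> {
    fn fill_bytes(&mut self, dest: &mut [u8]) {
        self.reader.read_exact(dest).expect("entropy source failed");
    }

    fn next_u64(&mut self) -> u64 {
        let mut buf = [0u8; 8];
        self.fill_bytes(&mut buf); // one 8-byte read, not 8 single-byte reads
        u64::from_ne_bytes(buf)
    }
}

fn main() {
    // On unixy systems /dev/urandom works as the backing reader.
    if let Ok(f) = std::fs::File::open("/dev/urandom") {
        let mut rng = ReaderRng { reader: f };
        println!("random u64: {}", rng.next_u64());
    }
}
```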
This is 2x faster on 64-bit computers at generating anything larger
than 32 bits.
It has been verified against the canonical C implementation from the
website of the creator of ISAAC64.
Also, move `Rng.next` to `Rng.next_u32` and add `Rng.next_u64` to
take full advantage of the wider word width; otherwise Isaac64 will
always be squeezed down into a u32 wasting half the entropy and
offering no advantage over the 32-bit variant.
This commit fixes all of the fallout of the previous commit which is an attempt
to refine privacy. There were a few unfortunate leaks which now must be plugged,
and the most horrible one is the current `shouldnt_be_public` module now inside
`std::rt`. I think that this either needs a slight reorganization of the
runtime, or otherwise it needs to just wait for the external users of these
modules to get replaced with their `rt` implementations.
Other fixes involve making things pub which should be pub, and otherwise
updating error messages that now reference privacy instead of referencing an
"unresolved name" (yay!).
This pull request changes to memory layout of the `CrateMap` struct to use static slices instead of raw pointers. Most of the discussion took place [here](63b5975efa (L1R92)) .
The memory layout of CrateMap changed, without bumping the version number in the struct. Another, more backward compatible, solution would be to keep the old code and increase the version number in the new struct. On the other hand, the `annihilate_fn` pointer was removed without bumping the version number recently.
At the moment, the stage0 compiler does not use the new memory layout, which would lead to segfaults during stage0 compilation, so I've added a dummy `iter_crate_map` function for stage0, which does nothing. Again, this could be avoided if we'd bump the version number in the struct and keep the old code.
I'd like to use a normal `for` loop [here](https://github.com/fhahn/rust/compare/logging-unsafe-removal?expand=1#L1R109),
```rust
for child in children.iter() {
    do_iter_crate_map(child, |x| f(x), visited);
}
```
but for some reason this only yields `error: unresolved enum variant, struct or const 'Some'` and I have no idea why.
This adds a large doc-block to the top of the std::logging module explaining how
to use it. This is mostly just making sure that all the information in the
manual's section about logging is also here (in case someone decides to look
into this module first).
This also removes the old console_{on,off} methods. As far as I can tell, the
functions were only used by the compiler, and there's no reason for them to be
used because they're all turned off by default anyway (maybe they were turned on
by default at some point...)
I believe that this is the final nail in the coffin and closes #5021
This lifts various restrictions on the runtime, for example the character limit
when logging a message. Right now the old debug!-style macros still involve
allocating (because they use fmt! syntax), but the new debug2! macros don't
involve allocating at all (unless the formatter for a type requires allocation).
This also includes a fix for yielding from single-threaded schedulers where the scheduler would stop working before its work queue was empty. Fixes the deadlocks that this patch had previously.
If there's no TLS key just yet, then there's nothing to unsafely borrow, so
continue returning None. This prevents causing the runtime to abort itself when
logging before the runtime is fully initialized.
Closes #9487
r? @brson
Progress on #7981
This doesn't completely close the issue because `struct A;` is still allowed, and it's a much larger change to disallow that. I'm also not entirely sure that we want to disallow that. Regardless, punting that discussion to the issue instead.
Also, documentation & general clean-up:
- remove `gen_char_from`: better served by `sample` or `choose`.
- `gen_bytes` generalised to `gen_vec`.
- `gen_int_range`/`gen_uint_range` merged into `gen_integer_range` and
made to be properly uniformly distributed. Fixes #8644.
Minor adjustments to other functions.
This large commit implements an `html` output option for rustdoc_ng. The
executable has been altered to be invoked as "rustdoc_ng html <crate>" and
it will dump everything into the local "doc" directory. JSON can still be
generated by changing 'html' to 'json'.
This also fixes a number of bugs in rustdoc_ng relating to comment stripping,
along with some other various issues that I found along the way.
The `make doc` command has been altered to generate the new documentation into
the `doc/ng/$(CRATE)` directories.
This is the second of two parts of #8991, now possible as a new snapshot
has been made. (The first part implemented the unreachable!() macro; it
was #8992, 6b7b8f2682.)
``std::util::unreachable()`` is removed summarily; any code which used
it should now use the ``unreachable!()`` macro.
Closes #9312.
Closes #8991.
This is a re-landing of #8645, except that the bindings are *not* being used to
power std::run just yet. Instead, this adds the bindings as standalone bindings
inside the rt::io::process module.
I made one major change from before, having to do with how pipes are
created/bound. It's much clearer now when you can read/write to a pipe, as
there's an explicit difference (different types) between an unbound and a bound
pipe. The process configuration now takes unbound pipes (and consumes ownership
of them), and will return corresponding pipe structures back if spawning is
successful (otherwise everything is destroyed normally).
A quick rundown:
- added `file::{readdir, stat, mkdir, rmdir}`
- Added access-constrained versions of `FileStream`; `FileReader` and `FileWriter` respectively
- big rework in `uv::file` .. most actions are by-val-self methods on `FsRequest`; `FileDescriptor` has gone the way of the dinosaurs
- playing nice w/ homing IO (I just copied ecr's work, hehe), etc
- added `FileInfo` trait, with an impl for `Path`
- wrapper for file-specific actions, with the file path always implied by self's value
- has the means to create `FileReader` & `FileWriter` (this isn't exposed in the top-level free function API)
- has "safe" wrappers for `stat()` that won't throw in the event of non-existence/error (in this case, I mean `is_file` and `exists`)
- actions should fail if done on non-regular-files, as appropriate
- added `DirectoryInfo` trait, with an impl for `Path`
- pretty much ditto above, but for directories
- added `readdir` (!!) to iterate over entries in a dir as a `~[Path]` (this was *brutal* to get working)
...<del>and lots of other stuff</del> not really. Do your worst!
This doesn't close any bugs as the goal is to convert the parameter to by-value, but this is a step towards being able to make guarantees about `&T` pointers (where T is Freeze) to LLVM.
std: remove unneeded field from RequestData struct
std: rt::uv::file - map uv_fs_stat & start refactoring calls into FsRequest
std: stubbing out stat calls from the top-down into uvio
std: uv_fs_* operations are now by-val self methods on FsRequest
std: post-rebase cleanup
std: add uv_fs_mkdir|rmdir + tests & minor test cleanup in rt::uv::file
WORKING: fleshing out FileStat and FileInfo + tests
std: reverting test files..
refactoring back and cleanup...
Fix uint overflow bugs in std::{at_vec, vec, str}
Closes #8742
Fix issue #8742, which, summarized, is: unsafe code in vec and str assumed
that a reservation for `X + Y` elements always succeeded and didn't overflow.
Introduce the method `Vec::reserve_additional(n)` to make it easy to check for
overflow in `Vec::push` and `Vec::push_all`.
In std::str, simplify and remove a lot of the unsafe code and use `push_str`
instead. With improvements to `.push_str` and the new function
`vec::bytes::push_bytes`, it looks like this change has either no or positive
impact on performance.
I believe there are still many places where `v.reserve(A + B)` can overflow.
This by itself is not an issue unless followed by (unsafe) code that sidesteps
bounds checks.
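For illustration, a minimal sketch of the overflow-checked reservation idea written against today's `Vec` (the helper name mirrors the commit's `reserve_additional`; today's `reserve` already takes the additional count and panics on overflow itself, so this only spells the check out):

```rust
// Check len + n explicitly before reserving, rather than letting an
// overflowed sum silently request too little capacity.
fn reserve_additional<T>(v: &mut Vec<T>, n: usize) {
    v.len()
        .checked_add(n)
        .expect("capacity overflow: len + n does not fit in usize");
    v.reserve(n);
}

fn main() {
    let mut v: Vec<u8> = Vec::new();
    reserve_additional(&mut v, 1024);
    assert!(v.capacity() >= 1024);
}
```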
A SendStr is a string that can hold either a ~str or a &'static str.
This can be useful as an optimization when an allocation is sometimes needed but the common case is statically known.
Possible use cases include Maps with both static and owned keys, or propagating error messages across task boundaries.
SendStr implements most basic traits in a way that hides the fact that it is an enum; in particular things like order and equality are only determined by the content of the wrapped strings.
Replaced std::rt::logging::SendableString with SendStr
Added tests for using a SendStr as a key in Hash- and Treemaps
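The same idea survives today as `Cow<'static, str>`; a small sketch of the usage pattern described above, with `Cow` standing in for the 2013 `SendStr` type:

```rust
use std::borrow::Cow;

type SendStr = Cow<'static, str>;

fn error_for(code: u32) -> SendStr {
    match code {
        404 => Cow::Borrowed("not found"), // common case: no allocation
        other => Cow::Owned(format!("unexpected status {}", other)),
    }
}

fn main() {
    // Equality looks only at the contents, not at which variant holds them.
    assert_eq!(error_for(404), SendStr::Owned("not found".to_string()));

    // Both variants are Send, so the message can cross task boundaries.
    std::thread::spawn(|| println!("{}", error_for(500)))
        .join()
        .unwrap();
}
```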
Remove these in favor of the two traits themselves and the wrapper
function std::from_str::from_str.
Add the function std::num::from_str_radix in the corresponding role for
the FromStrRadix trait.
Work a bit towards #9157 "Remove Either". These instances don't need to use Either and are better expressed in other ways (removing allocations and simplifying types).
This is a series of patches to modernize option and result. The highlights are:
* rename `.unwrap_or_default(value)` etc. to `.unwrap_or(value)` (a usage sketch follows this list)
* add `.unwrap_or_default()` that uses the `Default` trait
* add `Default` implementations for vecs, HashMap, Option
* add `Option.and(T) -> Option<T>`, `Option.and_then(&fn() -> Option<T>) -> Option<T>`, `Option.or(T) -> Option<T>`, and `Option.or_else(&fn() -> Option<T>) -> Option<T>`
* add `option::ToOption`, `option::IntoOption`, `option::AsOption`, `result::ToResult`, `result::IntoResult`, `result::AsResult`, `either::ToEither`, and `either::IntoEither`, `either::AsEither`
* renamed `Option::chain*` and `Result::chain*` to `and_then` and `or_else` to avoid the eventual collision with `Iterator.chain`.
* Added a bunch of impls of `Default`
* Added a `#[deriving(Default)]` syntax extension
* Removed impls of `Zero` for `Option<T>` and vecs.
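The renamed adapters above survive in today's `Option` almost unchanged; a small usage sketch (the `lookup` helper is hypothetical):

```rust
fn lookup(key: &str) -> Option<u32> {
    if key == "answer" { Some(42) } else { None }
}

fn main() {
    // unwrap_or takes a value; unwrap_or_default uses the Default impl.
    assert_eq!(lookup("answer").unwrap_or(0), 42);
    assert_eq!(lookup("missing").unwrap_or_default(), 0);

    // and_then chains another fallible step; or_else supplies a fallback lazily.
    assert_eq!(lookup("answer").and_then(|n| n.checked_mul(2)), Some(84));
    assert_eq!(lookup("missing").or_else(|| Some(7)), Some(7));
}
```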
The default buffer size is the same as the one in Java's BufferedWriter.
We may want BufferedWriter to have a Drop impl that flushes, but that
isn't possible right now due to #4252/#4430. This would be a bit
awkward due to the possibility of the inner flush failing. For what it's
worth, Java's BufferedReader doesn't have a flushing finalizer, but that
may just be because Java's finalizer support is awful.
Closes #8953
This is a reopening of the libuv-upgrade part of #8645. Hopefully this won't
cause random segfaults all over the place. The windows regression in testing
should also be fixed (it shouldn't build the whole compiler twice).
A notable difference from before is that gyp is now a git submodule instead of
always git-cloned at make time. This allows bundling for releases more easily.
Closes #8850
The trait will keep the `Iterator` naming, but a more concise module
name makes using the free functions less verbose. The module will define
iterables in addition to iterators, as it deals with iteration in
general.
This exposes a very simple function for resolving host names. There's a lot more that needs to be done, but this is probably enough for servo to get started connecting to real websites again.
An iterator that simply calls `.read_bytes()` each iteration.
I think choosing to own the Reader value and implementing Decorator to
allow extracting it is the most generically useful. The Reader type
variable can of course be some kind of reference type that implements
Reader.
In the generic form the `Bytes` iterator is well behaved itself and does not read ahead.
It performs abysmally on top of a FileStream, and much better if a buffering reader is inserted in between.
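A minimal sketch of such an iterator in today's `std::io` terms (today's `Read::bytes` adaptor is the descendant of this idea):

```rust
use std::io::{BufReader, Read};

// Each next() performs one small read against the underlying reader, so the
// iterator never reads ahead; inserting a BufReader is what makes it fast.
struct Bytes<R: Read> {
    reader: R,
}

impl<R: Read> Iterator for Bytes<R> {
    type Item = std::io::Result<u8>;

    fn next(&mut self) -> Option<Self::Item> {
        let mut byte = [0u8; 1];
        match self.reader.read(&mut byte) {
            Ok(0) => None, // end of stream
            Ok(_) => Some(Ok(byte[0])),
            Err(e) => Some(Err(e)),
        }
    }
}

fn main() -> std::io::Result<()> {
    let file = std::fs::File::open("/etc/hosts")?;
    // Without the BufReader, every byte would cost its own read syscall.
    let count = Bytes { reader: BufReader::new(file) }.count();
    println!("{} bytes", count);
    Ok(())
}
```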
We already do this for libstd tests automatically, and compiletest runs into the
same problems: when forking lots of processes, lots of file descriptors are
created. On OSX we can use specific syscalls to raise the limits in this
situation, though.
Closes #8904
The Listener trait takes two type parameters, the type of connection and the type of Acceptor,
and specifies only one method, listen, which consumes the listener and produces an Acceptor.
The Acceptor trait takes one type parameter, the type of connection, and defines two methods.
The accept() method waits for an incoming connection attempt and returns the result.
The incoming() method creates an iterator over incoming connections and is a default method.
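A rough sketch of the trait shapes this describes, reconstructed from the description above in today's syntax (the signatures and the iterator type are assumptions, not the original code):

```rust
use std::marker::PhantomData;

trait Listener<T, A: Acceptor<T>> {
    /// Consume the listener and start listening, producing an Acceptor.
    fn listen(self) -> A;
}

trait Acceptor<T> {
    /// Wait for one incoming connection attempt and return the result.
    fn accept(&mut self) -> T;

    /// Default method: an endless iterator over incoming connections.
    fn incoming(&mut self) -> Incoming<'_, Self, T>
    where
        Self: Sized,
    {
        Incoming { acceptor: self, _conn: PhantomData }
    }
}

/// Iterator returned by `Acceptor::incoming`.
struct Incoming<'a, A, T> {
    acceptor: &'a mut A,
    _conn: PhantomData<T>,
}

impl<'a, T, A: Acceptor<T>> Iterator for Incoming<'a, A, T> {
    type Item = T;
    fn next(&mut self) -> Option<T> {
        Some(self.acceptor.accept())
    }
}
```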
Example:
```rust
let listener = TcpListener::bind(addr); // Bind to a socket
let acceptor = listener.listen();       // Start the listener
for stream in acceptor.incoming() {
    // Process incoming connections forever (a failure will kill the task).
}
```
Closes #8689
The only user-facing change is handling non-integer (and zero) `RUST_THREADS` more nicely:
```
$ RUST_THREADS=x rustc # old
You've met with a terrible fate, haven't you?
fatal runtime error: runtime tls key not initialized
Aborted
$ RUST_THREADS=x ./x86_64-unknown-linux-gnu/stage2/bin/rustc # new
You've met with a terrible fate, haven't you?
fatal runtime error: `RUST_THREADS` is `x`, should be a positive integer
Aborted
```
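A sketch of the friendlier handling shown above (the function name and the exact behavior for an unset variable are assumptions for illustration):

```rust
use std::env;

// Read RUST_THREADS and fail with a specific message, instead of aborting
// deep in runtime TLS, when the value is not a positive integer.
fn rust_threads_from_env(default: usize) -> usize {
    match env::var("RUST_THREADS") {
        Err(_) => default, // unset (or not unicode): fall back to the default
        Ok(s) => match s.parse::<usize>() {
            Ok(n) if n > 0 => n,
            _ => panic!("`RUST_THREADS` is `{}`, should be a positive integer", s),
        },
    }
}

fn main() {
    println!("scheduler threads: {}", rust_threads_from_env(4));
}
```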
The other changes are converting some `for .. in range(x,y)` to `vec::from_fn` or `for .. in x.iter()` as appropriate; and removing a chain of (seemingly) unnecessary pointer casts.
(Also, fixes a typo in `extra::test` from #8823.)
This moves all local_data stuff into the `local_data` module and only that
module alone. It also removes a fair amount of "super-unsafe" code in favor of
just vanilla code generated by the compiler at the same time.
Closes #8113