This implements parts of the changes to `Result` and `Option` I proposed and discussed in this thread: https://mail.mozilla.org/pipermail/rust-dev/2013-November/006254.html
This PR includes:
- Adding `ok()` and `err()` option adapters for both `Result` variants (see the sketch after this list).
- Removing `get_ref`, `expect` and the iterator constructors for `Result`, as their functionality is now reachable through the variant adapters.
- Removing `Result`'s `ToStr` bound on the error type because of composability issues. (See https://mail.mozilla.org/pipermail/rust-dev/2013-November/006283.html)
- Some warning cleanups
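For reference, a minimal sketch of how the `ok()`/`err()` adapters behave, written in present-day Rust syntax rather than the 2013 syntax this PR targets:

```rust
fn main() {
    let good: Result<i32, String> = Ok(3);
    let bad: Result<i32, String> = Err("oops".to_string());

    // ok() keeps the success value as an Option, discarding the error variant.
    assert_eq!(good.clone().ok(), Some(3));
    assert_eq!(bad.clone().ok(), None);

    // err() keeps the error value as an Option, discarding the success variant.
    assert_eq!(good.err(), None);
    assert_eq!(bad.err(), Some("oops".to_string()));
}
```

Converting to `Option` first is what lets the ordinary `Option` combinators and iterator constructors stand in for the removed `Result` methods.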
In order to keep up to date with changes to the libraries that `llvm-config`
spits out, the dependencies on LLVM are listed in a dynamically generated rust file.
This file is now regenerated automatically whenever LLVM is updated, so it stays
up to date.
At the same time, this cleans some old dependency-related cruft out of the
makefiles that is no longer necessary.
Closes #10745
Closes #10744
It's useful to allow users to get at the internal std::rc::comm::Port,
and other such fields, since they implement important traits like
Select.
See [rust-dev] "select on std::comm::Port and different types" at https://mail.mozilla.org/pipermail/rust-dev/2013-November/006735.html for background.
Right now, as pointed out in #8132, it is very easy to introduce a subtle race
in the runtime. I believe that this is the cause of the current flakiness on the
bots.
I have taken the last idea mentioned in that issue which is to use a lock around
descheduling and context switching in order to solve this race.
Closes #8132
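The fix itself lives deep in the runtime, but the race it closes is the classic lost-wakeup pattern. A minimal analogue in present-day Rust (an illustration only, not the runtime's actual code) is holding one lock across both the "decide to block" check and the act of blocking:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    let waker = thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        // The waker takes the same lock before signalling, so it cannot fire
        // in the window between "decide to block" and "actually block".
        *lock.lock().unwrap() = true;
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    let mut ready = lock.lock().unwrap();
    while !*ready {
        // wait() atomically releases the lock and blocks, closing the race window.
        ready = cvar.wait(ready).unwrap();
    }

    waker.join().unwrap();
}
```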
This reverts commit c54427ddfb.
Leave in the #[ignore]s that were added to rustpkg tests.
Conflicts:
src/librustc/driver/driver.rs
src/librustc/metadata/creader.rs
The `integer_decode()` function decodes a float (f32/f64)
into integers containing the mantissa, exponent and sign.
It's needed for the `rationalize()` implementation of #9838.
The code was ported from ABCL [1].
[1] http://abcl.org/trac/browser/trunk/abcl/src/org/armedbear/lisp/FloatFunctions.java?rev=14465#L94
I got permission to use this code for Rust from Peter Graves (the ABCL copyright holder). If there's any further IP clearance needed, let me know.
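For context, here is a self-contained sketch (present-day Rust, with a hypothetical free-function name) of the decomposition that `integer_decode()` performs for an `f64`:

```rust
// Hypothetical stand-alone version of the decomposition: returns
// (mantissa, exponent, sign) such that value == sign * mantissa * 2^exponent.
fn integer_decode_f64(f: f64) -> (u64, i16, i8) {
    let bits = f.to_bits();
    let sign: i8 = if bits >> 63 == 0 { 1 } else { -1 };
    let mut exponent: i16 = ((bits >> 52) & 0x7ff) as i16;
    let mantissa = if exponent == 0 {
        // Subnormals: no implicit leading one bit.
        (bits & 0x000f_ffff_ffff_ffff) << 1
    } else {
        // Normal numbers: restore the implicit leading one bit.
        (bits & 0x000f_ffff_ffff_ffff) | 0x0010_0000_0000_0000
    };
    // Re-bias the exponent so the mantissa can be read as a plain integer.
    exponent -= 1023 + 52;
    (mantissa, exponent, sign)
}

fn main() {
    let (m, e, s) = integer_decode_f64(1.5);
    assert_eq!(s as f64 * m as f64 * 2f64.powi(e as i32), 1.5);
}
```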
This function had type &[u8] -> ~str, i.e. it allocates a new string
internally, even though the non-allocating versions, &[u8] -> &str and
~[u8] -> ~str, are all that is necessary in most circumstances.
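Roughly the same split survives in today's std (which reports invalid UTF-8 through a Result rather than failing); a minimal sketch using the current `std::str::from_utf8` and `String::from_utf8` entry points:

```rust
fn main() {
    let bytes: &[u8] = b"hello";

    // Borrowed, non-allocating check: &[u8] -> &str.
    let borrowed: &str = std::str::from_utf8(bytes).unwrap();
    assert_eq!(borrowed, "hello");

    // Owned conversion that reuses the vector's allocation: Vec<u8> -> String.
    let owned: String = String::from_utf8(bytes.to_vec()).unwrap();
    assert_eq!(owned, "hello");
}
```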
This registers new snapshots after the landing of #10528, and then goes on to tweak the build process to build a monolithic `rustc` binary for use in future snapshots. This mainly involved dropping the dynamic dependency on `librustllvm`, so that's now built as a static library (with a dynamically generated rust file listing LLVM dependencies).
This currently doesn't make the snapshot much smaller (24MB => 23MB), but I noticed that the executable has 11MB of metadata, so once progress is made on #10740 we should have a much smaller snapshot.
There's not really a super-compelling reason to distribute just a binary because we have all the infrastructure for dealing with a directory structure, but to me it seems "more correct" that a snapshot compiler is just a `rustc` binary.
This method is the mutable version of ImmutableVector::split. It is
a DoubleEndedIterator, making mut_rsplit irrelevant. The size_hint
method is not optimal because of #9629.
At the same time, this clarifies the *split* iterator docs.
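The method survives in today's std as `slice::split_mut`; a short sketch of the behaviour described above (present-day syntax, not the code in this PR):

```rust
fn main() {
    let mut v = [10, 40, 0, 30, 20, 0, 50];

    // Each group between the separator elements (here: zeros) is handed out
    // as its own non-overlapping &mut [i32].
    for group in v.split_mut(|&x| x == 0) {
        group.sort();
    }
    assert_eq!(v, [10, 40, 0, 20, 30, 0, 50]);

    // The splitting iterator is double-ended, so it can also be walked from
    // the back; that is what makes a separate mut_rsplit unnecessary.
    if let Some(last) = v.split_mut(|&x| x == 0).next_back() {
        last[0] += 1;
    }
    assert_eq!(v, [10, 40, 0, 20, 30, 0, 51]);
}
```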
mut_chunks() returns an iterator that produces mutable slices. This is the mutable version of the existing chunks() method on the ImmutableVector trait.
EDIT: This uses only safe code now.
PREVIOUSLY:
I tried to get this working with safe code only, but I couldn't figure out how to make that work. Before #8624, the exact same code worked without the need for the transmute() call. With that fix and without the transmute() call, the compiler complains about the call to mut_slice(). I think the issue is that the mutable slice that is returned will live longer than the self parameter since the self parameter doesn't have an explicit lifetime. However, that is the way that the Iterator trait defines the next() method. I'm sure there is a good reason for that, although I don't quite understand why. Anyway, I think the interface is safe, since the MutChunkIter will only hand out non-overlapping pointers and there is no way to get it to hand out the same pointer twice.
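In present-day std the method is spelled `chunks_mut`; a minimal sketch of the non-overlapping guarantee that makes a safe implementation possible:

```rust
fn main() {
    let mut v = [0u8; 10];

    // chunks_mut hands out non-overlapping &mut [u8] slices of length at most
    // 3, so no element can be reached through two different chunks.
    for (i, chunk) in v.chunks_mut(3).enumerate() {
        for byte in chunk.iter_mut() {
            *byte = i as u8;
        }
    }
    assert_eq!(v, [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]);
}
```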
BufferedWriter::inner flushes before returning the underlying writer.
BufferedWriter::write no longer flushes the underlying writer.
LineBufferedWriter::write flushes up to the *last* newline in the input
string, not the first.
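A hedged illustration of the same three behaviours using today's `std::io::BufWriter` and `std::io::LineWriter` (modern analogues, not the types touched in this patch):

```rust
use std::io::{BufWriter, LineWriter, Write};

fn main() -> std::io::Result<()> {
    // write() buffers; it does not flush through to the underlying writer.
    let mut buffered = BufWriter::new(Vec::new());
    buffered.write_all(b"abc")?;
    assert!(buffered.get_ref().is_empty());

    // Recovering the underlying writer flushes the buffer first.
    let inner: Vec<u8> = buffered.into_inner().expect("flush failed");
    assert_eq!(inner, b"abc");

    // A line-buffered writer flushes through everything up to and including
    // the *last* newline in the input; the tail stays buffered.
    let mut lines = LineWriter::new(Vec::new());
    lines.write_all(b"one\ntwo\npartial")?;
    assert_eq!(lines.get_ref().as_slice(), b"one\ntwo\n".as_slice());
    Ok(())
}
```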
In this series of commits, I've implemented static linking for rust. The scheme I implemented was the same as my [mailing list post](https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html).
The commits have more details on the nitty gritty of what went on. I've rebased this on top of my native mutex pull request (#10479), but I imagine that it will land before this does; I just wanted to pre-emptively get all the rebase conflicts out of the way (because this also reorganizes how librustrt is built).
Some contentious points I want to make sure are all good:
* I've added more "compiler chooses a default" behavior than I would like; I want to make sure that this is all very clearly outlined in the code, and if not I would like to remove behavior or make it clearer.
* I want to make sure that the new "fancy suite" tests are ok (using make/python instead of another rust crate)
If we do indeed pursue this, I would be more than willing to write up a document describing how linking in rust works. I believe that this behavior should be very understandable, and the compiler should never hinder someone just because linking is a little fuzzy.
This commit alters the build process of the compiler to build a static
librustrt.a instead of a dynamic version. This means that we can stop
distributing librustrt, as well as stop linking against it by default in the compiler.
This also means that if you attempt to build rust code without libstd, it will
no longer work if there are any landing pads in play. The reason for this is
that LLVM and rustc will emit calls to the various upcalls in librustrt used to
manage exception handling. In theory we could split librustrt into librustrt and
librustupcall. We would then distribute librustupcall and link to it for all
programs using landing pads, but I would rather see just one librustrt artifact
and simplify the build process.
The major benefit of doing this is that building a static rust library for use
in embedded situations all of a sudden just became a whole lot more feasible.
Closes #3361
This commit implements the support necessary for generating both intermediate
and result static rust libraries. This is an implementation of my thoughts in
https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html.
When compiling a library, we still retain the "lib" option, although now there
are "rlib", "staticlib", and "dylib" as options for crate_type (and these are
stackable). The idea of "lib" is to generate the "compiler default" instead of
having to choose (although all are interchangeable). For now I have left the
"compiler default" as a dynamic library for size reasons.
Of the rust libraries, lib{std,extra,rustuv} will bootstrap with an
rlib/dylib pair, but lib{rustc,syntax,rustdoc,rustpkg} will only be built as a
dynamic object. I chose this for size reasons, but also because you're probably
not going to be embedding the rustc compiler anywhere any time soon.
Other than the options outlined above, there are a few defaults/preferences that
are now opinionated in the compiler:
* If both a .dylib and .rlib are found for a rust library, the compiler will
prefer the .rlib variant. This is overridable via the -Z prefer-dynamic option
* If generating a "lib", the compiler will generate a dynamic library. This is
overridable by explicitly saying what flavor you'd like (rlib, staticlib,
dylib).
* If no options are passed to the command line, and no crate_type is found in
the destination crate, then an executable is generated
With this change, you can successfully build a rust program with 0 dynamic
dependencies on rust libraries. There is still a dynamic dependency on
librustrt, but I plan on removing that in a subsequent commit.
This change includes no tests just yet. Our current testing
infrastructure/harnesses aren't very amenable to doing flavorful things with
linking, so I'm planning on adding a new mode of testing which I believe belongs
as a separate commit.
Closes #552
- Removed module reexport workaround for the integer module macros
- Removed legacy reexports of `cmp::{min, max}` in the integer module macros
- Combined a few macros in `vec` into one
- Documented a few issues
This adds an implementation of the Chase-Lev work-stealing deque to libstd
under std::rt::deque. I've been unable to break the implementation of the deque
itself, and it's not super highly optimized just yet (everything uses a SeqCst
memory ordering).
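The API has the usual worker/stealer shape; here is a sketch of that shape using the present-day crossbeam-deque crate as a stand-in (an assumed external dependency, not the std::rt::deque code added here):

```rust
use crossbeam_deque::{Steal, Worker};
use std::thread;

fn main() {
    // The owning scheduler pushes and pops work on its own end of the deque.
    let worker: Worker<u32> = Worker::new_lifo();
    for task in 0..100 {
        worker.push(task);
    }

    // Other schedulers steal from the opposite end through a Stealer handle.
    let stealer = worker.stealer();
    let thief = thread::spawn(move || {
        let mut stolen = 0u32;
        loop {
            match stealer.steal() {
                Steal::Success(_) => stolen += 1,
                Steal::Retry => continue, // lost a race, try again
                Steal::Empty => break,    // nothing left to steal
            }
        }
        stolen
    });

    // The owner keeps draining its own end concurrently with the thief.
    let mut popped = 0u32;
    while worker.pop().is_some() {
        popped += 1;
    }

    // Every task is handed out exactly once, either popped or stolen.
    assert_eq!(popped + thief.join().unwrap(), 100);
}
```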
The major snag in implementing the Chase-Lev deque is that the buffers used to
store data internally cannot get deallocated back to the OS. In the meantime,
buffers are allocated from and returned to a shared buffer pool (synchronized by
a normal mutex), in the hope of not overcommitting too much memory. It is in
theory possible to eventually free the buffers, but one must be very careful in
doing so.
I was unable to get some good numbers from src/test/bench tests (I don't think
many of them are slamming the work queue that much), but I was able to get some
good numbers from one of my own tests. In a recent rewrite of select::select(),
I found that my implementation was incredibly slow due to contention on the
shared work queue. Upon switching to the parallel deque, I saw the contention
drop to 0 and the runtime go from 1.6s to 0.9s, with most of the time
spent in libuv awakening the schedulers (plus allocations).
Closes #4877