There is currently not much precedent for target crates requiring syntax
extensions to compile their test versions. This dependency is possible, but
can't be encoded through the normal means of DEPS_regex because it is a
test-only dependency and it must be a *host* dependency (it's a syntax
extension).
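As a purely illustrative sketch of why this must be a host dependency (the crate names mirror the regex case, but the snippet is not taken from the patch), a test crate loads the extension at compile time, so the extension has to be built for the machine running rustc rather than for the target:

```
// Illustrative only: a test crate pulling in a syntax-extension crate.
// The extension runs inside rustc during expansion, so it must be a
// *host* artifact, while the ordinary `regex` dependency is a target one.
#![feature(phase)]

#[phase(syntax)]
extern crate regex_macros;
extern crate regex;

#[test]
fn digits_match() {
    assert!(regex!(r"^\d+$").is_match("2014"));
}
```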
Closes #13844
Compile-fail tests for syntax extensions belong in this suite, which has correct
dependencies on all artifacts rather than just the target artifacts.
Closes #13818
This allows the use of syntax extensions when cross-compiling (fixing #12102). It does this by encoding the target triple in the crate metadata and checking it when searching for files. Currently the crate triple must match the host triple when there is a macro_registrar_fn, must match the target triple when linking, and can match either when only macro_rules! macros are used.
Due to carelessness, this is pretty much a duplicate of https://github.com/mozilla/rust/pull/13450.
This adds the target triple to the crate metadata.
When searching for a crate, the phase (link, syntax) is taken into account.
During the link phase only crates matching the target triple are considered.
During the syntax phase, either the target or host triple will be accepted, unless
the crate defines a macro_registrar, in which case only the host triple will
match.
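The matching rule can be summed up in a short, purely illustrative sketch (this is not the actual rustc code; the names are made up):

```
// Illustrative sketch of the phase-dependent triple matching described above.
enum Phase { Link, Syntax }

fn crate_matches(phase: Phase,
                 crate_triple: &str,
                 target_triple: &str,
                 host_triple: &str,
                 has_macro_registrar: bool) -> bool {
    match phase {
        // During link, only crates built for the target are acceptable.
        Phase::Link => crate_triple == target_triple,
        // A macro_registrar must run inside rustc, so it has to be a host crate.
        Phase::Syntax if has_macro_registrar => crate_triple == host_triple,
        // Plain macro_rules! crates may come from either triple.
        Phase::Syntax => crate_triple == host_triple || crate_triple == target_triple,
    }
}

fn main() {
    // When cross-compiling, a macro_rules!-only crate built for the target
    // is still usable during syntax expansion:
    assert!(crate_matches(Phase::Syntax,
                          "arm-linux-androideabi",
                          "arm-linux-androideabi",
                          "x86_64-unknown-linux-gnu",
                          false));
}
```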
Instead of passing through CC, which may have things like ccache and other
arguments (when using clang), this commit filters out the necessary arguments
from CC to pass the right linker to rustc.
Closes #13562
This is a bit of an interesting upgrade to LLVM. Upstream LLVM has started using C++11 features, so they require a C++11 compiler to build. I've updated all the bots to have a C++11 compiler, and they appear to be building LLVM successfully:
* Linux bots - I added gcc/g++ 4.7 (good enough)
* Android bots - same as the linux ones
* Mac bots - I installed the most recent command line tools for Lion which gives us clang 3.2, but LLVM wouldn't build unless it was explicitly asked to link to `libc++` instead of `libstdc++`. This involved tweaking `mklldeps.py` and the `configure` script to get things to work out
* Windows bots - mingw-w64 has gcc 4.8.1 which is sufficient for building LLVM (hurray!)
* BSD bots - I updated FreeBSD to 10.0 which brought with it a relevant version of clang.
The largest fallout I've seen so far is that the test suite doesn't work at all on FreeBSD 10. We've already stopped gating on FreeBSD due to #13427 (we used to be on FreeBSD 9), so I don't think this puts us in too bad of a situation. I will continue to attempt to fix FreeBSD and the breakage there.
The LLVM update brings with it all of the recently upstreamed LLVM patches. We only have one local patch now which is just an optimization, and isn't required to use upstream LLVM. I want to maintain compatibility with LLVM 3.3 and 3.4 while we can, and this upgrade is keeping us up to date with the 3.5 release. Once 3.5 is released we will in theory no longer require a bundled LLVM.
The goal of the snapshot bots is to produce binaries which can run in as many
locations as possible. Currently we build on CentOS 6 for this reason, but
LLVM's update to C++11 reduces the number of platforms that we could
possibly run on.
This adds a --enable-llvm-static-stdcpp option to the ./configure script for
Rust which will enable building a librustc with a static dependence on
libstdc++. This normally isn't necessary, but this option can be used on the
snapshot builders in order to continue to make binaries which should be able to
run in as many locations as possible.
In the past, Windows was installed from stage3 to guarantee convergence between
the host and target artifacts, but syntax extensions on all platforms are
currently relying on convergence, so special casing this one platform has become
less relevant over time.
This will also have the added benefit of dealing with #13474 and #13491. These
issues will be closed after the next nightly is confirmed to fix them.
This is intended to be the first thing somebody new to the language reads about Rust. It is supposed to be simple and intriguing, to give the user an idea of whether Rust is appropriate for them, and to hint that there's a lot of cool stuff to learn if they just keep diving deeper.
I'm particularly happy with the sequence of concurrency examples.
After removing absolute rpaths, cross compile builds (notably the nightly
builders) broke. This is because the RPATH was pointing at an empty directory
because only the rustc binary is copied over, not all of the target libraries.
This modifies the cross compile logic to fixup the rpath of the stage0
cross-compiled rustc to point to where it came from.
Concerns have been raised about using absolute rpaths in #11746, and this is the
first step towards not relying on rpaths at all. The only current use case for
an absolute rpath is when a non-installed rust builds an executable that then
moves from its built location. The relative rpath back to libstd and the absolute
rpath to the installation directory (CFG_PREFIX) still remain.
Closes #11746
Rebasing of #12754
First, documented the existing `CTEST_DISABLE_$(TEST_GROUP)` pattern
for conditionally disabling tests based on missing host features.
Added variant of above, `CTEST_DISABLE_NONSELFHOST_$(TEST_GROUP)`,
which is only queried in contexts where the target is not on the
CFG_HOST list (which I interpret as the list of targets that our host
can compatibly emulate; e.g. i686 and x86_64 can in theory run each
other's tests).
Driveby fix: Remove redundant copy of
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec dependency declaration.
These are not installed anywhere, but are included under `./doc` for
those who want an offline copy with their nightlies. This increases the
size of the (compressed) tarball from 76 to 83 MB.
These syntax extensions need a place to be documented, so this starts passing a
`--cfg dox` parameter to `rustdoc` when building and testing documentation. This
way macros can be documented without having any effect on the compiled crate,
only on the documentation.
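Below is a minimal, hypothetical illustration of the pattern (the macro name is made up, and the actual std macros may be documented differently): a documentation-only item is gated on the `dox` cfg, so `rustdoc --cfg dox` sees and renders it while a normal compile ignores it entirely.

```
/// Docs for a syntax extension live on this stub, which is compiled out
/// of the real crate unless the build passes --cfg dox to rustdoc.
#[cfg(dox)]
#[macro_export]
macro_rules! my_ext {
    () => (())
}
```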
Closes #5605
1. Fix a long-standing typo in the makefile: the relevant
CTEST_NAME here is `rpass-full` (with a dash), not
`rpass_full`.
2. The rpass-full tests depend on the complete set of target
libraries. Therefore, the rpass-full tests need to use
the dependencies held in the CSREQ-prefixed variable, not
the TLIBRUSTC_DEFAULT-prefixed variable.
Mac can't actually build our source tarballs because its `tar`
command doesn't support the --exclude-vcs flag. This is just
a workaround to make our mac nightlies work (we get our source
tarballs from the linux bot).
This performs a few touch-ups to the OSX installer:
* A rust logo is shown during installation
* The installation happens to /usr/local by default (instead of /)
* A new welcome screen is shown that's slightly more relevant
This fixes some problems with

    make verify-grammar

llnextgen still reports a lot of errors.

FYI: My build directory /my-test/build is different from the source directory /my-test/rust.

    cd /my-test/build
    /my-test/rust/configure --prefix=/my-test/bin
    make
    make install
    make verify-grammar
When calling `make verify-grammar`, rust.md cannot be found because the
path to rust.md is missing. The path is set to `$(D)/rust.md`.
This can only be tested when llnextgen is installed.
Signed-off-by: Jan Kobler <eng1@koblersystems.de>
The previous dependency calculation was based on an arbitrary set of asterisks
at an arbitrary depth, but using the recursive version should be much more
robust in figuring out what's dependent.
Closes #13118
This doesn't work quite right yet (we need to build packages for all hosts)
and I'm not ready to turn on new dist artifacts yet, but I want to start doing
dry runs for 0.10, so I'm turning this off for now.
After `make clean` I'm seeing the build break with
```
cp: cannot stat ‘x86_64-unknown-linux-gnu/rt/libbacktrace/.libs/libbacktrace.a’: No such file or directory
```
Deleting the libbacktrace dir entirely on clean fixes it.
This commit switches over the backtrace infrastructure from piggy-backing off
the RUST_LOG environment variable to using the RUST_BACKTRACE environment
variable (logging is now disabled in libstd).
This commit moves all logging out of the standard library into an external
crate. This crate is the new crate which is responsible for all logging macros
and logging implementation. A few reasons for this change are:
* The crate map has always been a bit of a code smell among rust programs. It
has difficulty being loaded on almost all platforms, and it's used almost
exclusively for logging and only logging. Removing the crate map is one of the
end goals of this movement.
* The compiler has a fair bit of special support for logging. It has the
__log_level() expression as well as generating a global word per module
specifying the log level. This is unfairly favoring the built-in logging
system, and is much better done purely in libraries instead of the compiler
itself.
* Initialization of logging is much easier to do if there is no reliance on a
magical crate map being available to set module log levels.
* If the logging library can be written outside of the standard library, there's
no reason that it shouldn't be. It's likely that we're not going to build the
highest quality logging library of all time, so third-party libraries should
be able to provide just as high-quality logging systems as the default one
provided in the rust distribution.
With a migration such as this, the change does not come for free. There are some
subtle changes in the behavior of liblog vs the previous logging macros:
* The core change of this migration is that there is no longer a physical
log-level per module. This concept is still emulated (it is quite useful), but
there is now only a global log level, not a local one. This global log level
is a reflection of the maximum of all log levels specified. The previously
generated logging code looked like:
    if specified_level <= __module_log_level() {
        println!(...)
    }

The newly generated code looks like:

    if specified_level <= ::log::LOG_LEVEL {
        if ::log::module_enabled(module_path!()) {
            println!(...)
        }
    }
Notably, the first layer of checking is still intended to be "super fast" in
that it's just a load of a global word and a compare. The second layer of
checking is executed to determine if the current module does indeed have
logging turned on.
This means that if any module has a debug log level turned on, all modules
with debug log levels get a little bit slower (they all do more expensive
dynamic checks to determine if they're turned on or not).
Semantically, this migration brings no change in this respect, but
runtime-wise, this will have a perf impact on some code.
* A `RUST_LOG=::help` directive will no longer print out a list of all modules
that can be logged. This is because the crate map will no longer specify the
log levels of all modules, so the list of modules is not known. Additionally,
warnings can no longer be provided if a malformed logging directive was
supplied.
The new "hello world" for logging looks like:
    #[phase(syntax, link)]
    extern crate log;

    fn main() {
        debug!("Hello, world!");
    }
This commit shreds all remnants of libextra from the compiler and standard
distribution. Two modules, c_vec/tempfile, were moved into libstd after some
cleanup, and the other modules were moved to separate crates as seen fit.
Closes #8784. Closes #12413. Closes #12576.
This aims to cover the basics of writing safe unsafe code. At the moment
it is just designed to be a better place for the `asm!()` docs than the
detailed release notes wiki page, and I took the time to write up some
other things.
More examples are needed, especially of things that can subtly go wrong;
and vast areas of `unsafe`-ty aren't covered, e.g. `static mut`s and
thread-safety in general.
This enables the lowering of llvm 64b intrinsics to hardware ops, resolving issues around `__kernel_cmpxchg64` on older kernels on ARM devices, and also enables use of the hardware floating point unit, resolving https://github.com/mozilla/rust/issues/10482.
Whenever a failure happens, if a program is run with
`RUST_LOG=std::rt::backtrace` a backtrace will be printed to the task's stderr
handle. Stack traces are unconditionally printed on double-failure and
rtabort!().
This ended up having a nontrivial implementation, and here's some highlights of
it:
* We're bundling libbacktrace for everything but OSX and Windows
* We use libgcc_s and its libunwind apis to get a backtrace of instruction
pointers
* On OSX we use dladdr() to go from an instruction pointer to a symbol
* On unix that isn't OSX, we use libbacktrace to get symbols
* Windows, as usual, has an entirely separate implementation
Lots more fun details and comments can be found in the source itself.
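As a small usage sketch (the failure macro of this era was `fail!`; today's equivalent is `panic!`, used below so the snippet compiles), any failing program prints a backtrace when run with the environment variable mentioned above:

```
fn main() {
    // Run as: RUST_LOG=std::rt::backtrace ./example
    // The failure below then prints a backtrace to the task's stderr.
    panic!("deliberate failure to demonstrate the backtrace");
}
```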
Closes #10128
Closes #12803 (std: Relax an assertion in oneshot selection) r=brson
Closes #12818 (green: Fix a scheduler assertion on yielding) r=brson
Closes #12819 (doc: discuss try! in std::io) r=alexcrichton
Closes #12820 (Use generic impls for `Hash`) r=alexcrichton
Closes #12826 (Remove remaining nolink usages) r=alexcrichton
Closes #12835 (Emacs: always jump the cursor if needed on indent) r=brson
Closes #12838 (Json method cleanup) r=alexcrichton
Closes #12843 (rustdoc: whitelist the headers that get a § on hover) r=alexcrichton
Closes #12844 (docs: add two unlisted libraries to the index page) r=pnkfelix
Closes #12846 (Added a test that checks that unary structs can be mutably borrowed) r=sfackler
Closes #12847 (mk: Fix warnings about duplicated rules) r=nmatsakis
This functionality is not super-core and so doesn't need to be included
in std. It's possible that std may need rand (it does a little bit now,
for io::test) in which case the functionality required could be moved to
a secret hidden module and reexposed by librand.
Unfortunately, using #[deprecated] here is hard: there's too much to
mock to make it feasible, since we have to ensure that programs still
typecheck to reach the linting phase.
- remove `node.js` dep., it has no effect as of #12747 (1)
- switch between LaTeX compilers, some cleanups
- CSS: fixup the print stylesheet, refactor highlighting code (2)
(1): `prep.js` outputs its own HTML directives, which `pandoc` cannot recognize when converting the document into LaTeX (this is why the PDF docs have never been highlighted as of now).
Note that if we were to add the `.rust` class to snippets, we could probably use pandoc's native highlighting capabilities, i.e. Kate ([here is](http://adrientetar.github.io/rust-tuts/tutorial/tutorial.pdf) an example of that).
(2): the only real highlighting change is for lifetimes, which are now brown instead of red; the rest is just a refactor of two shades of red that look the same.
Also I made the number highlighting for src in rustdoc a tint lighter so that it is less distracting.
@alexcrichton, @huonw
Closes #9873. Closes #12788.
Work towards #9876.
Several minor things here:
* Fix the `need_ok` function in `configure`
* Install man pages with non-executable permissions
* Use the correct directory for man pages when installing (this was a recent regression)
* Put all distributables in a new `dist/` directory in the build directory (there are soon to be significantly more of these)
Finally, this also creates a new, more precise way to install and uninstall Rust's files, the `install.sh` script, and creates a build target (currently `dist-tar-bins`) that creates a binary tarball containing all the installable files, boilerplate and license docs, and `install.sh`.
This binary tarball is the lowest-common denominator way to install Rust on Unix. We'll use it as the default installer on Linux (OS X will use .pkg).
## How `install.sh` works
* First, the makefiles (`prepare.mk` and `dist.mk`) put all the stuff that needs to be installed in a new directory in `dist/`.
* Then it puts `install.sh` in that same directory and a list of all the files to install at `rustlib/manifest`.
* Then the directory can be packaged and distributed.
* When `install.sh` runs it does some sanity checking then copies everything in the manifest to the install prefix, then copies the manifest as well.
* When `install.sh` runs again in the future it first looks for the existing manifest at the install prefix, and if it exists deletes everything in it. This is how the core distribution is upgraded - cargo is responsible for the rest.
* `install.sh --uninstall` will uninstall Rust
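To make the manifest mechanism concrete, here is a minimal sketch of the copy-then-record-manifest scheme described above, written in Rust rather than shell; it is illustrative only (function and path names are made up), not the actual `install.sh` logic:

```
use std::fs;
use std::path::Path;

// Copy every file listed in the manifest from the unpacked tarball into the
// install prefix, then record the manifest itself under the prefix so a later
// run (or --uninstall) knows exactly which files to remove.
fn install(manifest: &str, src_root: &Path, prefix: &Path) -> std::io::Result<()> {
    for rel in manifest.lines().filter(|l| !l.is_empty()) {
        let dest = prefix.join(rel);
        if let Some(dir) = dest.parent() {
            fs::create_dir_all(dir)?;
        }
        fs::copy(src_root.join(rel), &dest)?;
    }
    // Recording the manifest at the prefix is what makes upgrade/uninstall possible.
    let rustlib = prefix.join("rustlib");
    fs::create_dir_all(&rustlib)?;
    fs::write(rustlib.join("manifest"), manifest)?;
    Ok(())
}

fn main() {
    // Example invocation with illustrative paths (commented out so the sketch
    // builds without an actual image directory on disk):
    // install("bin/rustc\nlib/libstd.so\n",
    //         Path::new("./rust-image"), Path::new("/usr/local")).unwrap();
}
```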
## Future work:
* Modify `install.sh` to accept `--man-dir` etc
* Rewrite `install.mk` to delegate to `install.sh`
* Investigate how `install.sh` does or doesn't work with .pkg on Mac
* Modify `dist.mk` to create `.pkg` files for all hosts
* Possibly use [makeself](http://www.megastep.org/makeself/) to create self-extracting installers
* Modify dist-snap bots to run on mac as well, uploading binary tarballs and .pkg files for the four combos of linux, mac, x86, and x86_64.
* Adjust build system to be able to augment versions with '-nightly'
* Adjust build system to name dist artifacts without version numbers e.g. `rust-nightly-...pkg`. This is so we don't leave a huge trail of old nightly binaries on S3 - they just get overwritten.
* Create new dist-nightly builder
* Give the build master a new cron job to push to dist-nightly every night
* Add docs to distributables
* Update README.md to reflect the new reality
* Modernize the website to promote new installers
`prep.js` outputs its own HTML directives, which `pandoc` cannot
recognize when converting the document into LaTeX (this is why the
PDF docs have never been highlighted as of now).
Note that if we were to add the `.rust` class to snippets, we could
probably use pandoc's native highlighting capabilities, i.e. Kate.
This restores the old behaviour (as compared to building PDF versions of
all standalone docs), because some of the guides use unicode characters,
which seems to make pdftex unhappy.
parsing limitations.
Sundown parses

    ```
    ~~~

as a valid codeblock (i.e. mismatching delimiters), which made using
rustdoc on its own documentation impossible (since it used nested
codeblocks to demonstrate how testable codesnippets worked).
This modifies those snippets so that they're delimited by indentation,
but this then means they're tested by `rustdoc --test` & rendered as
Rust code (because there's no way to add `notrust` to
indentation-delimited code blocks). A comment is added to stop the
compiler reading the text too closely, but this unfortunately has to be
visible in the final docs, since that's the text on which the
highlighting happens.
E.g. this stops check-...-doc rules for `rustdoc.md` and `librustdoc`
from stamping on each other, so that they are correctly built and
tested. (Previously only the rustdoc crate was tested.)
This converts it to be very similar to crates.mk, with a single list of
the documentation items creating all the necessary bits and pieces.
Changes include:
- rustdoc is used to render HTML & test standalone docs
- documentation building now obeys NO_REBUILD=1
- testing standalone docs now obeys NO_REBUILD=1
- L10N is slightly less broken (in particular, it shares dependencies
and code with the rest of the code)
- PDFs can be built for all documentation items, not just tutorial and
manual
- removes the obsolete & unused extract-tests.py script
- adjust the CSS for standalone docs to use the rustdoc syntax
highlighting
This new SVH is used to uniquely identify all crates as a snapshot in time of
their ABI/API/publicly reachable state. This current calculation is just a hash
of the entire crate's AST. This is obviously incorrect, but it is currently the
reality for today.
This change threads through the new Svh structure which originates from crate
dependencies. The concept of crate id hash is preserved to provide efficient
matching on filenames for crate loading. The inspected hash once crate metadata
is opened has been changed to use the new Svh.
The goal of this hash is to identify when upstream crates have changed but
downstream crates have not been recompiled. This will prevent the def-id drift
problem where upstream crates were recompiled, thereby changing their metadata,
but downstream crates were not recompiled.
In the future this hash can be expanded to exclude contents of the AST like doc
comments, but limitations in the compiler prevent this change from being made at
this time.
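As a rough illustration of the idea (not rustc's actual implementation; the hash function and inputs are placeholders), an SVH-style value is just a hash computed over the crate's contents, recorded in metadata, and compared when a downstream crate's dependencies are loaded:

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder for "hash of the entire crate's AST": hash whatever
// representation of the crate is available and call the result its SVH.
fn compute_svh(crate_contents: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    crate_contents.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let recorded_in_downstream_metadata = compute_svh("pub fn foo() {}");
    let recomputed_from_upstream_crate = compute_svh("pub fn foo() { /* changed */ }");
    // A mismatch means the upstream crate changed after the downstream crate
    // was compiled, so the downstream crate must be rebuilt.
    if recorded_in_downstream_metadata != recomputed_from_upstream_crate {
        println!("upstream changed: downstream crate needs to be recompiled");
    }
}
```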
Closes #10207
The compiler itself doesn't necessarily need any features of green threading
such as spawning tasks and lots of I/O, so libnative is slightly more
appropriate for rustc to use itself.
This should also help the rusti bot which is currently incompatible with libuv.
tidy has some limitations (e.g. the "checked in binaries" check doesn't
and can't actually check git), and so it's useful to run tests without
running tidy occasionally.
This trades an O(n) allocation + memcpy for an O(1) proc allocation (for
the destructor). Most users only need &[u8] anyway (all of the users in
the main repo), and so this offers large gains.
These two containers are indeed collections, so their place is in
libcollections, not in libstd. There will always be a hash map as part of the
standard distribution of Rust, but by moving it out of the standard library it
makes libstd that much more portable to more platforms and environments.
This conveniently also removes the stuttering of 'std::hashmap::HashMap',
although 'collections::HashMap' is only one character shorter.
Two optimizations:
1. Compress `foo.bc` in each rlib with `flate`. These are just taking up space and are only used with LTO, no need for LTO to be speedy.
2. Stop installing `librustc.rlib` and friends; this is a *huge* source of bloat. There's no need for us to install static libraries for these components.
cc #12440
You rarely want to statically link against librustc and friends, so there's no
real reason to install the rlib version of these libraries, especially because
the rlibs are massive.
LLVM's tools are not contained in the local directory if --llvm-root is used by
the ./configure script. This fixes the installation path to be the root provided
by --llvm-root.
The new methodology can be found in the re-worded comment, but the gist of it is
that -C prefer-dynamic doesn't turn off static linkage. The error messages
should also be a little more sane now.
Closes #12133
Work toward #9876.
This adds `prepare.mk`, which is simply a more heavily-parameterized `install.mk`, then uses `prepare` to implement both `install` and the windows installer (`dist`). Smoke tested on both Linux and Windows.
Because the build system treats Makefile.in and the .mk files slightly
differently (.in is copied, .mk are included), this makes the system
more uniform. Fewer build system changes will require a complete
reconfigure.
Currently when you run `make -jN` it's likely that you'll remove compiler-rt and
then it won't get cp'd back into the right place. I believe the reason for this
is that the compiler-rt library target never got updated so make decided it
never needed to copy the files back into place. The files were all there at the
beginning of `make`, but then we may clean out the stage0 versions if we unzip
the snapshot again.
Includes an upstream commit by pcwalton to improve codegen of our enums getting
moved around.
This also introduces a new commit on top of our stack of patches to fix a mingw32 build issue. I have submitted the patch upstream: http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140210/204653.html
I verified that this builds on the try bots, which amazes me because I think that C++11 is turned on now, but I guess we're still lucky!
Closes #10613 (pcwalton's patch landed)
Closes #11992 (llvm has removed these options)
Two unfortunate allocations were wrapping a proc() in a proc() with
GreenTask::build_start_wrapper, and then boxing this proc in a ~proc() inside of
Context::new(). Both of these allocations were a direct result from two
conditions:
1. The Context::new() function has a nice api of taking a procedure argument to
start up a new context with. This inherently required an allocation by
build_start_wrapper because extra code needed to be run around the edges of a
user-provided proc() for a new task.
2. The initial bootstrap code only understood how to pass one argument to the
next function. By modifying the assembly and entry points to understand more
than one argument, more information is passed through in registers instead of
allocating a pointer-sized context.
This is sadly where I end up throwing mips under a bus because I have no idea
what's going on in the mips context switching code and don't know how to modify
it.
Closes #7767
cc #11389
Previously crates like `green` and `native` would still depend on their
parents when running `make check-stage2-green NO_REBUILD=1`, this
ensures that they only depend on their source files.
Also, apply NO_REBUILD to the crate doc tests, so, for example,
`check-stage2-doc-std` will use an already compiled `rustdoc` directly.
These are ancient. I removed a bunch of questions that are less relevant - or completely irrelevant, updated other entries, and removed things that are already better expressed elsewhere.
libextra is currently being split into several crates. This commit adds
them all to the dist target in order to have them in the final tarballs.
Signed-off-by: Luca Bruno <lucab@debian.org>
src/README.txt has been renamed in a30d61b05a; `make dist` is
thus failing as it is unable to find it.
This commit makes the dist target work again.
Signed-off-by: Luca Bruno <lucab@debian.org>
Part of #8784
Changes:
- Everything labeled under collections in libextra has been moved into a new crate 'libcollections'.
- Renamed container.rs to deque.rs, since it was no longer 'container traits for extra', just a deque trait.
- Crates that depend on the collections have been updated and dependencies sorted.
- I think I changed all the imports in the tests to make sure it works. I'm not entirely sure, as near the end of the tests there was yet another `use` that I forgot to change, and when I went to try again, it started rebuilding everything, which I don't currently have time for.
There will probably be incompatibility between this and the other pull requests that are splitting up libextra. I'm happy to rebase once those have been merged.
The tests I didn't get to run should pass. But I can redo them another time if they don't.
This has been a long time coming. Conditions in rust were initially envisioned
as being a good alternative to the error code return pattern. The idea is that all
errors are fatal-by-default, and you can opt-in to handling the error by
registering an error handler.
While sounding nice, conditions ended up having some unforeseen shortcomings:
* Actually handling an error has some very awkward syntax:
    let mut result = None;
    let mut answer = None;
    io::io_error::cond.trap(|e| { result = Some(e) }).inside(|| {
        answer = Some(some_io_operation());
    });

    match result {
        Some(err) => { /* hit an I/O error */ }
        None => {
            let answer = answer.unwrap();
            /* deal with the result of I/O */
        }
    }
This pattern can certainly use functions like io::result, but at its core
actually handling conditions is fairly difficult.
* The "zero value" of a function is often confusing. One of the main ideas
behind using conditions was to change the signature of I/O functions. Instead
of read_be_u32() returning a result, it returned a u32. Errors were notified
via a condition, and if you caught the condition you understood that the "zero
value" returned is actually a garbage value. These zero values are often
difficult to understand, however.
One case of this is the read_bytes() function. The function takes an integer
length of the number of bytes to read, and returns an array of that size. The
array may actually be shorter, however, if an error occurred.
Another case is fs::stat(). The theoretical "zero value" is a blank stat
struct, but it's a little awkward to create and return a zero'd out stat
struct on a call to stat().
In general, the return values of functions that can raise errors are much more
natural when using a Result as opposed to an always-usable zero-value.
* Conditions impose a necessary runtime requirement on *all* I/O. In theory I/O
is as simple as calling read() and write(), but using conditions imposed the
restriction that a rust local task was required if you wanted to catch errors
with I/O. While certainly a surmountable difficulty, this was always a bit of
a thorn in the side of conditions.
* Functions raising conditions are not always clear that they are raising
conditions. This suffers a similar problem to exceptions where you don't
actually know whether a function raises a condition or not. The documentation
likely explains, but if someone retroactively adds a condition to a function
there's nothing forcing upstream users to acknowledge a new point of task
failure.
* Libraries using I/O are not guaranteed to correctly raise on conditions when an
error occurs. In developing various I/O libraries, it's much easier to just
return `None` from a read rather than raising an error. The silent contract of
"don't raise on EOF" was a little difficult to understand and threw a wrench
into the answer of the question "when do I raise a condition?"
Many of these difficulties can be overcome through documentation, examples, and
general practice. In the end, all of these difficulties added together ended up
being too overwhelming and improving various aspects didn't end up helping that
much.
A result-based I/O error handling strategy also has shortcomings, but the
cognitive burden is much smaller. The tooling necessary to make this strategy as
usable as conditions were is much smaller than the tooling necessary for
conditions.
Perhaps conditions may manifest themselves as a future entity, but for now
we're going to remove them from the standard library.
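For contrast with the condition-based example above, here is a small sketch of the Result-based style this moves toward (written with today's `?` operator; at the time the equivalent was the `try!` macro, and the file name is made up):

```
use std::fs::File;
use std::io::{self, Read};

// Errors propagate as values instead of raising a condition; the caller
// decides how to handle them.
fn read_config(path: &str) -> io::Result<String> {
    let mut contents = String::new();
    File::open(path)?.read_to_string(&mut contents)?;
    Ok(contents)
}

fn main() {
    match read_config("Config.toml") {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(e) => println!("I/O error: {}", e),
    }
}
```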
Closes #9795. Closes #8968.
This commit removes the -c, --emit-llvm, -s, --rlib, --dylib, --staticlib,
--lib, and --bin flags from rustc, adding the following flags:
* --emit=[asm,ir,bc,obj,link]
* --crate-type=[dylib,rlib,staticlib,bin,lib]
The -o option has also been redefined to be used for *all* flavors of outputs.
This means that we no longer ignore it for libraries. The --out-dir remains the
same as before.
The new logic for files that rustc emits is as follows:
1. Output types are dictated by the --emit flag. The default value is
--emit=link, and this option can be passed multiple times and have all options
stacked on one another.
2. Crate types are dictated by the --crate-type flag and the #[crate_type]
attribute. The flags can be passed many times and stack with the crate
attribute.
3. If the -o flag is specified, and only one output type is specified, the
output will be emitted at this location. If more than one output type is
specified, then the filename of -o is ignored, and all output goes in the
directory that -o specifies. The -o option always ignores the --out-dir
option.
4. If the --out-dir flag is specified, all output goes in this directory.
5. If -o and --out-dir are both not present, all output goes in the directory of
the crate file.
6. When multiple output types are specified, the filestem of all output is the
same as the name of the CrateId (derived from a crate attribute or from the
filestem of the crate file).
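As a hedged illustration of how the attribute and flag stack (the file name and contents are made up; the flag spellings are the ones listed in this commit):

```
// Building this file with, e.g.:
//     rustc --crate-type=staticlib --emit=link --emit=ir example.rs
// should produce both an rlib (from the attribute below) and a staticlib
// (from the flag), plus LLVM IR, all sharing the filestem derived from
// the crate id.
#![crate_type = "rlib"]

pub fn answer() -> i32 { 42 }
```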
Closes #7791. Closes #11056. Closes #11667.
This commit removes the -c, --emit-llvm, -s, --rlib, --dylib, --staticlib,
--lib, and --bin flags from rustc, adding the following flags:
* --emit=[asm,ir,bc,obj,link]
* --crate-type=[dylib,rlib,staticlib,bin,lib]
The -o option has also been redefined to be used for *all* flavors of outputs.
This means that we no longer ignore it for libraries. The --out-dir remains the
same as before.
The new logic for files that rustc emits is as follows:
1. Output types are dictated by the --emit flag. The default value is
--emit=link, and this option can be passed multiple times and have all
options stacked on one another.
2. Crate types are dictated by the --crate-type flag and the #[crate_type]
attribute. The flags can be passed many times and stack with the crate
attribute.
3. If the -o flag is specified, and only one output type is specified, the
output will be emitted at this location. If more than one output type is
specified, then the filename of -o is ignored, and all output goes in the
directory that -o specifies. The -o option always ignores the --out-dir
option.
4. If the --out-dir flag is specified, all output goes in this directory.
5. If -o and --out-dir are both not present, all output goes in the current
directory of the process.
6. When multiple output types are specified, the filestem of all output is the
same as the name of the CrateId (derived from a crate attribute or from the
filestem of the crate file).
Closes #7791. Closes #11056. Closes #11667.
- `extra::json` didn't make the cut because of `extra::json`'s required
dep on `extra::TreeMap`. If/when `extra::TreeMap` moves out of `extra`,
then `extra::json` could move into `serialize`.
- `libextra`, `libsyntax` and `librustc` depend on the newly created
`libserialize`
- The extensions to various `extra` types like `DList`, `RingBuf`, `TreeMap`
and `TreeSet` for `Encodable`/`Decodable` were moved into the respective
modules in `extra`
- There is some trickery, evident in `src/libextra/lib.rs` where a stub
of `extra::serialize` is set up (in `src/libextra/serialize.rs`) for
use in the stage0 build, where the snapshot rustc is still making
deriving for `Encodable` and `Decodable` point at extra. Big props to
@huonw for help working out the re-export solution for this
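A minimal sketch of what now depends on the new crate, using the era's `deriving` attribute (illustrative only; the type is made up and the syntax predates today's `derive`):

```
extern crate serialize;

// Deriving the serialization traits now pulls them from libserialize
// rather than from libextra.
#[deriving(Encodable, Decodable)]
struct Point {
    x: int,
    y: int,
}

fn main() {}
```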
extra: inline extra::serialize stub
fix stuff clobbered in rebase + don't reexport serialize::serialize
no more globs in libserialize
syntax: fix import of libserialize traits
librustc: fix bad imports in encoder/decoder
add serialize dep to librustdoc
fix failing run-pass tests w/ serialize dep
adjust uuid dep
more rebase de-clobbering for libserialize
fixing tests, pushing libextra dep into cfg(test)
fix doc code in extra::json
adjust index.md links to serialize and uuid library
Previously, the check-fast and check-lite test suites weren't picking up all
target crates, rather just std/extra. In order to ensure that all of our crates
work on windows, I've modified these rules to build the test suites for all
TARGET_CRATES members. Note that this still excludes rustc/syntax/rustdoc.
In line with the dissolution of libextra - #8784 - this moves arena and glob into
their own respective modules. Updates .gitignore with the entries
doc/{arena,glob} accordingly.
This changes android testing to upload *all* target crates rather than just a
select subset. This should unblock #11867 which is introducing a libglob
dependency in testing.