Three small changes:
1. Re-organize headers in the Strings guide so they show up correctly.
2. Build the Strings guide with the other docs.
3. Include the Strings guide in the list of guides.
Add libunicode; move unicode functions from core
- created new crate, libunicode, below libstd
- split `Char` trait into `Char` (libcore) and `UnicodeChar` (libunicode)
- Unicode-aware functions now live in libunicode
- `is_alphabetic`, `is_XID_start`, `is_XID_continue`, `is_lowercase`,
`is_uppercase`, `is_whitespace`, `is_alphanumeric`, `is_control`, `is_digit`,
`to_uppercase`, `to_lowercase`
- added `width` method in UnicodeChar trait
- determines printed width of character in columns, or None if it is a non-NULL control character
- takes a boolean argument indicating whether the present context is CJK or not (characters with 'A'mbiguous widths are double-wide in CJK contexts, single-wide otherwise)
- split `StrSlice` into `StrSlice` (libcore) and `UnicodeStrSlice` (libunicode)
- functionality formerly in `StrSlice` that relied upon Unicode functionality from `Char` is now in `UnicodeStrSlice`
- `words`, `is_whitespace`, `is_alphanumeric`, `trim`, `trim_left`, `trim_right`
- also moved `Words` type alias into libunicode because `words` method is in `UnicodeStrSlice`
- unified Unicode tables from libcollections, libcore, and libregex into libunicode
- updated `unicode.py` in `src/etc` to generate aforementioned tables
- generated new tables based on latest Unicode data
- added `UnicodeChar` and `UnicodeStrSlice` traits to prelude
- libunicode is now the collection point for the `std::char` module, combining the libunicode functionality with the `Char` functionality from libcore
- thus, moved doc comment for `char` from `core::char` to `unicode::char`
- libcollections remains the collection point for `std::str`
The Unicode-aware functions that previously lived in the `Char` and `StrSlice` traits are no longer available to programs that only use libcore. To regain use of these methods, include the libunicode crate and `use` the `UnicodeChar` and/or `UnicodeStrSlice` traits:
    extern crate unicode;
    use unicode::UnicodeChar;
    use unicode::UnicodeStrSlice;
    use unicode::Words; // if you want to use the words() method
NOTE: this does *not* impact programs that use libstd, since UnicodeChar and UnicodeStrSlice have been added to the prelude.
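As a rough, illustrative sketch of the relocated methods in use (2014-era Rust; the boolean passed to `width` is the CJK-context flag described above, and the explicit trait imports are what a crate without the libstd prelude would need):

```
extern crate unicode;

use unicode::{UnicodeChar, UnicodeStrSlice};

fn main() {
    let c = 'λ';
    println!("alphabetic: {}", c.is_alphabetic());       // UnicodeChar
    println!("uppercase:  {}", c.to_uppercase());        // UnicodeChar
    println!("width:      {}", c.width(false).unwrap()); // new width method, non-CJK context
    println!("trimmed:    '{}'", "  hello  ".trim());    // UnicodeStrSlice
}
```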
Closes #15224
[breaking-change]
- unicode tests live in coretest crate
- libcollections str tests need UnicodeChar trait.
- libregex perlw tests were checking a char in the Alphabetic category,
\x2161. Confirmed perl 5.18 considers this a \w character. Changed to
\x2961, which is not \w as the test expects.
This commit disables rustc's emission of rpath attributes into dynamic libraries
and executables by default. The functionality is still preserved, but it must
now be manually enabled via a `-C rpath` flag.
This involved a few changes to the local build system:
* --disable-rpath is now the default configure option
* Makefiles now prefer our own LD_LIBRARY_PATH over the user's LD_LIBRARY_PATH
in order to support building rust with rust already installed.
* The compiletest program was taught to correctly pass through the aux dir as a
component of LD_LIBRARY_PATH in more situations.
The major impact of this change is that neither rustdoc nor rustc will work
out-of-the-box in all situations because they are dynamically linked. It must be
arranged to ensure that the libraries of a rust installation are part of the
LD_LIBRARY_PATH. The default installation paths for all platforms ensure this,
but if an installation is in a nonstandard location, then configuration may be
necessary.
Additionally, for all developers of rustc, it will no longer be possible to run
$target/stageN/bin/rustc out-of-the-box. The old behavior can be regained
through the `--enable-rpath` option to the configure script.
This change brings linux/mac installations in line with windows installations
where rpath is not possible.
Closes #11747
[breaking-change]
The compiler will no longer insert a hash or version into a filename by default.
Instead, all output is simply based off the crate name being compiled. For
example, a crate name of `foo` would produce the following outputs:
* bin => foo
* rlib => libfoo.rlib
* dylib => libfoo.{so,dylib} or foo.dll
* staticlib => libfoo.a
The old behavior has been moved behind a new codegen flag,
`-C extra-filename=<hash>`. For example, with the "extra filename" of `bar` and
a crate name of `foo`, the following outputs would be generated:
* bin => foo (same old behavior)
* rlib => libfoobar.rlib
* dylib => libfoobar.{so,dylib} or foobar.dll
* staticlib => libfoobar.a
The makefiles have been altered to pass a hash by default to invocations of
`rustc` so all installed rust libraries will have a hash in their filename. This
is done because the standard libraries are intended to be installed into
privileged directories such as /usr/local. Additionally, it involves very few
build system changes!
RFC: 0035-remove-crate-id
[breaking-change]
This makes the `in-header`, `markdown-before-content`, and `markdown-after-content` options available to `rustdoc` when generating documentation for any crate.
Before, these options were only available when creating documentation *from* markdown. Now, they are available when generating documentation from source.
This also updates the `rustdoc -h` output to reflect these changes. It does not update the `man rustdoc` page, nor does it update the documentation in [the `rustdoc` manual](http://doc.rust-lang.org/rustdoc.html).
Libcore's test infrastructure is complicated by the fact that many lang
items are defined in the crate. The current approach (realcore/realstd
imports) is hacky and hard to work with (tests inside of core::cmp
haven't been run for months!).
Moving tests to a separate crate does mean that they can only test the
public API of libcore, but I don't feel that that is too much of an
issue. The only tests that I had to get rid of were some checking the
various numeric formatters, but those are also exercised through normal
format! calls in other tests.
In line with what @brson, @cmr, @nikomatsakis and I discussed this morning, my
redux of the tutorial will be implemented as the Guide. This way, I can work in
small iterations, rather than dropping a huge PR, which is hard to review. In
addition, the community can observe my work as I'm doing it.
This adds a note in line with [this comment][reddit] that clarifies the state
of the tutorial, and the community's involvement with it.
[reddit]: http://www.reddit.com/r/rust/comments/28bew8/rusts_documentation_is_about_to_drastically/ci9c98k
The aim of these changes is not to work out generic support for bi-endian architectures, but to allow people to develop for little-endian MIPS machines (issue #7190).
Closes #14888 (Allow disabling jemalloc as the memory allocator)
Closes #14905 (rustc: Improve span for error about using a method as a field.)
Closes #14920 (Fix #14915)
Closes #14924 (Add a Syntastic plugin for Rust.)
Closes #14935 (debuginfo: Correctly handle indirectly recursive types)
Closes #14938 (Reexport `num_cpus` in `std::os`. Closes #14707)
Closes #14941 (std: Don't fail the task when a Future is dropped)
Closes #14942 (rustc: Don't mark type parameters as exported)
Closes #14943 (doc: Fix a link in the FAQ)
Closes #14944 (Update "use" to "uses" on ln186)
Closes #14949 (Update repo location)
Closes #14950 (fix typo in the libc crate)
Closes #14951 (Update Sublime Rust github link)
Closes #14953 (Fix --disable-rpath and tests)
This involved a few changes to the local build system:
* Makefiles now prefer our own LD_LIBRARY_PATH over the user's LD_LIBRARY_PATH
in order to support building rust with rust already installed.
* The compiletest program was taught to correctly pass through the aux dir as a
component of LD_LIBRARY_PATH in more situations.
This change was spliced out of #14832 to consist of just the fixes to running
tests without an rpath setting embedded in executables.
It seems that in one of the rebases I resolved conflicts incorrectly and left a redundant line. It is absent in the current master and might cause a compilation failure by copying a file onto itself.
Rust no longer has support for JIT compilation, so it doesn't currently
require a PaX MPROTECT exception. The extended attributes are preferred
over modifying the binaries so it's not actually going to work on most
systems like this anyway.
If JIT compilation ends up being supported again, it should handle this
by *always* applying the exception via an extended attribute without
performing auto-detection of PaX on the host. The `paxctl` tool is only
necessary with the older method involving modifying the ELF binary.
This adds a new configure option, --jemalloc-root, which will specify a location
at which libjemalloc_pic.a must live. This library is then used for the build
triple as the jemalloc library to link.
This commit is the final step in the libstd facade, #13851. The purpose of this
commit is to move libsync underneath the standard library, behind the facade.
This will allow core primitives like channels, queues, and atomics to all live
in the same location.
There were a few notable changes and a few breaking changes as part of this
movement:
* The `Vec` and `String` types are reexported at the top level of libcollections
* The `unreachable!()` macro was copied to libcore
* The `std::rt::thread` module was moved to librustrt, but it is still
reexported at the same location.
* The `std::comm` module was moved to libsync
* libsync's old `comm` module (duplex streams) was moved under the new `sync::comm` and renamed to `duplex`.
It is now a private module with types/functions being reexported under
`sync::comm`. This is a breaking change for any existing users of duplex
streams.
* All concurrent queues/deques were moved directly under libsync. They are also
all marked with #![experimental] for now if they are public.
* The `task_pool` and `future` modules no longer live in libsync, but rather
live under `std::sync`. They will forever live at this location, but they may
move to libsync if the `std::task` module moves as well.
[breaking-change]
When generating documentation, rustdoc has the ability to generate relative
links within the current distribution of crates to one another. To do this, it
must recognize when a crate's documentation is in the same output directory. The
current threshold for "local documentation for crate X being available" is
whether the directory "doc/X" exists.
This change modifies the build system to have new dependencies for each
directory of upstream crates for a rustdoc invocation. This will ensure that
when building documentation, all the crates in the standard distribution are
guaranteed to have relative links to one another.
This change is prompted by guaranteeing that offline docs always work with one
another. Before this change, races could mean that some docs were built before
others, and hence may have http links when relative links would suffice.
Closes #14747
As part of the libstd facade efforts, this commit extracts the runtime interface
out of the standard library into a standalone crate, librustrt. This crate will
provide the following services:
* Definition of the rtio interface
* Definition of the Runtime interface
* Implementation of the Task structure
* Implementation of task-local-data
* Implementation of task failure via unwinding via libunwind
* Implementation of runtime initialization and shutdown
* Implementation of thread-local-storage for the local rust Task
Notably, this crate avoids the following services:
* Thread creation and destruction. The crate does not require knowledge of
an OS threading system, and as a result it seemed best to leave out the
`rt::thread` module from librustrt. The librustrt module does depend on
mutexes, however.
* Implementation of backtraces. There is no inherent requirement for the runtime
to be able to generate backtraces. As will be discussed later, this
functionality continues to live in libstd rather than librustrt.
As usual, a number of architectural changes were required to make this crate
possible. Users of "stable" functionality will not be impacted by this change,
but users of the `std::rt` module will likely note the changes. A list of
architectural changes made is:
* The stdout/stderr handles no longer live directly inside of the `Task`
structure. This is a consequence of librustrt not knowing about `std::io`.
These two handles are now stored inside of task-local-data.
The handles were originally stored inside of the `Task` for perf reasons, and
TLD is not currently as fast as it could be. For comparison, 100k prints goes
from 59ms to 68ms (a 15% slowdown). This appeared to me to be an acceptable
perf loss for the successful extraction of a librustrt crate.
* The `rtio` module was forced to duplicate more functionality of `std::io`. As
the module no longer depends on `std::io`, `rtio` now defines structures such
as socket addresses, addrinfo fiddly bits, etc. The primary change made was
that `rtio` now defines its own `IoError` type. This type is distinct from
`std::io::IoError` in that it does not have an enum for what error occurred,
but rather a platform-specific error code.
The native and green libraries will be updated in later commits for this
change, and the bulk of this effort was put behind updating the two libraries
for this change (with `rtio`).
* Printing a message on task failure (along with the backtrace) continues to
live in libstd, not in librustrt. This is a consequence of the above decision
to move the stdout/stderr handles to TLD rather than inside the `Task` itself.
The unwinding API now supports registration of global callback functions which
will be invoked when a task fails, allowing for libstd to register a function
to print a message and a backtrace.
The API for registering a callback is experimental and unsafe, as the
ramifications of running code during unwinding are pretty hairy.
* The `std::unstable::mutex` module has moved to `std::rt::mutex`.
* The `std::unstable::sync` module has been moved to `std::rt::exclusive` and
the type has been rewritten to not internally have an Arc and to have an RAII
guard structure when locking. Old code should stop using `Exclusive` in favor
of the primitives in `libsync`, but if necessary, old code should port to
`Arc<Exclusive<T>>` (a hedged usage sketch follows this list).
* The local heap has been stripped down to have fewer debugging options. None of
these were tested, and none of these have been used in a very long time.
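As a hedged sketch of the new `rt::exclusive` shape described above (the constructor and lock-method names here are assumptions, not a quotation of the API):

```
use std::rt::exclusive::Exclusive;

fn main() {
    // Assumed names: `new` wraps a value, `lock` is unsafe and returns an
    // RAII guard that releases the lock when it is dropped.
    let counter = Exclusive::new(0u);
    unsafe {
        let mut guard = counter.lock();
        *guard += 1;
    }
    unsafe { assert_eq!(*counter.lock(), 1u); }
}
```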
[breaking-change]
This grows a new option inside of rustdoc to add the ability to submit examples
to an external website. If the `--markdown-playground-url` command line option
or crate doc attribute `html_playground_url` is present, then examples will have
a button on hover to submit the code to the playground specified.
This commit enables submission of example code to play.rust-lang.org. The code
submitted is that which is tested by rustdoc, not necessarily the exact code
shown in the example.
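For illustration, a library crate can opt in with the doc attribute named above; the playground URL shown is the one this commit targets:

```
// Crate-level attribute in the crate root: rustdoc will offer to submit the
// code of each documented example to the configured playground.
#![doc(html_playground_url = "http://play.rust-lang.org/")]

pub fn answer() -> int { 42 }
```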
Closes #14654
As with the previous commit with `librand`, this commit shuffles around some
`collections` code. The new state of the world is similar to that of librand:
* The libcollections crate now only depends on libcore and liballoc.
* The standard library has a new module, `std::collections`. All functionality
of libcollections is reexported through this module.
I would like to stress that this change is purely cosmetic. There are very few
alterations to these primitives.
There are a number of notable points about the new organization:
* std::{str, slice, string, vec} all moved to libcollections. There is no reason
these primitives shouldn't be usable in a freestanding context that has
allocation. These are all reexported in their usual places in
the standard library.
* The `hashmap`, and transitively the `lru_cache`, modules no longer reside in
`libcollections`, but rather in libstd. The reason for this is because the
`HashMap::new` constructor requires access to the OSRng for initially seeding
the hash map. Beyond this requirement, there is no reason that the hashmap
could not move to libcollections.
I do, however, have a plan to move the hash map to the collections module. The
`HashMap::new` function could be altered to require that the `H` hasher
parameter ascribe to the `Default` trait, allowing the entire `hashmap` module
to live in libcollections. The key idea would be that the default hasher would
be different in libstd. Something along the lines of:
    // src/libstd/collections/mod.rs
    pub type HashMap<K, V, H = RandomizedSipHasher> =
        core_collections::HashMap<K, V, H>;
This is not possible today because you cannot invoke static methods through
type aliases. If we modified the compiler, however, to allow invocation of
static methods through type aliases, then this type definition would
essentially be switching the default hasher from `SipHasher` in libcollections
to a libstd-defined `RandomizedSipHasher` type. This type's `Default`
implementation would randomly seed the `SipHasher` instance, and otherwise
perform the same as `SipHasher`.
This future state doesn't seem incredibly far off, but until that time comes,
the hashmap module will live in libstd to not compromise on functionality.
* In preparation for the hashmap moving to libcollections, the `hash` module has
moved from libstd to libcollections. A previously snapshotted commit enables a
distinct `Writer` trait to live in the `hash` module which `Hash`
implementations are now parameterized over.
Due to using a custom trait, the `SipHasher` implementation has lost its
specialized methods for writing integers. These can be re-added
backwards-compatibly in the future via default methods if necessary, but the
FNV hashing should satisfy much of the need for speedier hashing.
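A hedged sketch of a manual `Hash` implementation under the `hash::Writer` parameterization described above (2014-era syntax):

```
use std::hash::{Hash, Writer};

struct Point {
    x: int,
    y: int,
}

// Manual impls are parameterized over a hash state `S: Writer` rather than
// being tied to one concrete hasher.
impl<S: Writer> Hash<S> for Point {
    fn hash(&self, state: &mut S) {
        self.x.hash(state);
        self.y.hash(state);
    }
}
```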
A list of breaking changes:
* HashMap::{get, get_mut} no longer fails with the key formatted into the error
message with `{:?}`; instead, a generic message is printed. With backtraces,
it should still be not too hard to track down errors.
* The HashMap, HashSet, and LruCache types are now available through
std::collections instead of the collections crate.
* Manual implementations of hash should be parameterized over `hash::Writer`
instead of just `Writer`.
[breaking-change]
This commit shuffles around some of the `rand` code, along with some
reorganization. The new state of the world is as follows:
* The librand crate now only depends on libcore. This interface is experimental.
* The standard library has a new module, `std::rand`. This interface will
eventually become stable.
Unfortunately, this entailed more of a breaking change than just shuffling some
names around. The following breaking changes were made to the rand library:
* Rng::gen_vec() was removed. This has been replaced with Rng::gen_iter() which
will return an infinite stream of random values. Previous behavior can be
regained with `rng.gen_iter().take(n).collect()` (see the sketch after this list).
* Rng::gen_ascii_str() was removed. This has been replaced with
Rng::gen_ascii_chars() which will return an infinite stream of random ascii
characters. Similarly to gen_iter(), previous behavior can be emulated with
`rng.gen_ascii_chars().take(n).collect()`
* {IsaacRng, Isaac64Rng, XorShiftRng}::new() have all been removed. These all
relied on being able to use an OSRng for seeding, but this is no longer
available in librand (where these types are defined). To retain the same
functionality, these types now implement the `Rand` trait so they can be
generated with a random seed from another random number generator. This allows
the stdlib to use an OSRng to create seeded instances of these RNGs.
* Rand implementations for `Box<T>` and `@T` were removed. These seemed to be
pretty rare in the codebase, and it allows for librand to not depend on
liballoc. Additionally, other pointer types like Rc<T> and Arc<T> were not
supported. If this is undesirable, librand can depend on liballoc and regain
these implementations.
* The WeightedChoice structure is no longer built with a `Vec<Weighted<T>>`,
but rather a `&mut [Weighted<T>]`. This means that the WeightedChoice
structure now has a lifetime associated with it.
* The `sample` method on `Rng` has been moved to a top-level function in the
`rand` module due to its dependence on `Vec`.
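A hedged sketch of the replacement patterns for the first two bullets, assuming a program that links libstd (where `std::rand` reexports this API):

```
use std::rand::{task_rng, Rng};

fn main() {
    let mut rng = task_rng();

    // Previously: rng.gen_vec::<uint>(5)
    let v: Vec<uint> = rng.gen_iter::<uint>().take(5).collect();

    // Previously: rng.gen_ascii_str(8)
    let s: String = rng.gen_ascii_chars().take(8).collect();

    println!("{} values, ascii sample: {}", v.len(), s);
}
```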
cc #13851
[breaking-change]
This commit moves reflection (as well as the {:?} format modifier) to a new
libdebug crate, all of which is marked experimental.
This is a breaking change because it now requires the debug crate to be
explicitly linked if the :? format qualifier is used. This means that any code
using this feature will have to add `extern crate debug;` to the top of the
crate. Any code relying on reflection will also need to do this.
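For example (illustrative), code that uses the `{:?}` qualifier now needs:

```
extern crate debug;  // now required for the reflection-based {:?} formatter

fn main() {
    println!("{:?}", ("hello", 42i));
}
```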
Closes #12019
[breaking-change]
I mostly tried to remain backwards compatible with old invocations of
the `configure` script; if you do not want to use `CC` et al., you
should not have to; you can keep using `--enable-clang` and/or
`--enable-ccache`.
The overall intention is to capture the following precedences for
guessing the C compiler:
1. Value of `CC` at make invocation time.
2. Value of `CC` at configure invocation time.
3. Compiler inferred at configure invocation time (`gcc` or `clang`).
The strategy is to check (at `configure` time) if each of the
environment variables is set, and if so, save its value in a
corresponding `CFG_` variable (e.g. `CFG_CC`).
Then, in the makefiles, if `CC` is not set but `CFG_CC` is, then we
use the `CFG_CC` setting as `CC`.
Also, I fold the potential user-provided `CFLAGS` and `CXXFLAGS`
values into all of the per-platform `CFLAGS` and `CXXFLAGS` settings.
(This was opposed to adding `$(CFLAGS)` in an ad-hoc manner to various
parts of the mk files.)
Fix#13805.
----
Note that if you try to set the compiler to clang via the `CC` and
`CXX` environment variables, you will probably need to also set
`CXXFLAGS` to `--enable-libcpp` so that LLVM will be configured
properly.
----
Introduce CFG_USING_CLANG, which is distinguished from
CFG_ENABLE_CLANG because the former represents "we think we're using
clang, choose appropriate warning-control options" while the latter
represents "we asked configure (or the host required) that we attempt
to use clang, so check that we have an appropriate version of clang."
The main reason I added this is that I wanted to allow the user to
choose clang via setting the `CC` environment variable, but I did not
want that method of selection to get confused with the user passing
the `--enable-clang` option.
----
A digression: The `configure` script does not infer the compiler
setting if `CC` is set; but if `--enable-clang` was passed, then it
*does* still attempt to validate that the clang version is compatible.
Supporting this required revising `CLANG_VERSION` check to be robust
in face of user-provided `CC` value.
In particular, on Travis, the `CC` is set to `gcc` and so the natural
thing to do is to attempt to use `gcc` as the compiler, but Travis is
also passing `--enable-clang` to configure. So, what is the right
answer in the face of these contradictory requests?
One approach would be to have `--enable-clang` supersede the setting
for `CC` (and instead just call whatever we inferred for `CFG_CLANG`).
That sounds maximally inflexible to me (pnkfelix): a developer
requesting a `CC` value probably wants it respected, and should be
able to set it to something else; it is harder for that developer to
hack our configure script to change its inferred path to clang.
A second approach would be to blindly use the `CC` value but keep
going through the clang version check when `--enable-clang` is turned
on. But on Travis (a Linux host), the `gcc` invocation won't print a
clang version, so we would not get past the CLANG_VERSION check in
that context.
A third approach would be to never run the CLANG_VERSION check if `CC`
is explicitly set. That is not a terrible idea; but if the user uses
`CC` to pass in a path to some other version of clang that they want
to test, probably should still send that through the `CLANG_VERSION`
check.
So in the end I (pnkfelix) took a fourth approach: do the
CLANG_VERSION check if `CC` is unset *or* if `CC` is set to a string
ending with `clang`. This way setting `CC` to things like
`path/to/clang` or `ccache clang` will still go through the
CLANG_VERSION check, while setting `CC` to `gcc` or some unknown
compiler will skip the CLANG_VERSION check (regardless of whether the
user passed --enable-clang to `configure`).
----
Drive-by fixes:
* The call that sets `CFG_CLANG_VERSION` was quoting `"$CFG_CC"` in
its invocation, but that does not play nicely with someone who sets
`$CFG_CC` to e.g. `ccache clang`, since you do not want to interpret
that whole string as a command.
(On the other hand, a path with spaces might need the quoted
invocation. Not sure which one of these corner use-cases is more
important to support.)
* Fix chk_cc error message to point user at `gcc` not `cc`.
Two line summary: Distinguish HOST_RPATH and TARGET_RPATH; added
RPATH_LINK_SEARCH; skip tests broken in stage1; general cleanup.
`HOST_RPATH_VAR$(1)_T_$(2)_H_$(3)` and `TARGET_RPATH_VAR$(1)_T_$(2)_H_$(3)`
both match the format of the old `RPATH_VAR$(1)_T_$(2)_H_$(3)` (which
is still being set the same way that it was before, to one of either
HOST/TARGET depending on what stage we are building). Namely, the format
is <XXX>_RPATH_VAR = "<LD_LIB_PATH_ENVVAR>=<COLON_SEP_PATH_ENTRIES>"
What this commit does:
* Pass both of the (newly introduced) HOST and TARGET rpath setup vars
to `maketest.py`
* Update `maketest.py` to no longer update the LD_LIBRARY_PATH itself
Instead, it passes along the HOST and TARGET rpath setup vars in
environment variables `HOST_RPATH_ENV` and `TARGET_RPATH_ENV`
* Also, pass the current stage number to maketest.py; it in turn
passes it (via an env var) to run-make tests.
This allows the run-make tests to selectively change behavior
(e.g. turn themselves off) to deal with incompatibilities with
e.g. stage1.
* Cleanup: Distinguish in tools.mk between the command to run (`RUN`)
and the file to generate to drive that command (`RUN_BINFILE`). The
main thing this enables is that `RUN` can now setup the
`TARGET_RPATH_ENV` without having to dirty up the runner code in
each of the `run-make` Makefiles.
* Cleanup: Factored out commands to delete dylib/rlib into
REMOVE_DYLIBS/REMOVE_RLIBS.
There were places where we were only calling `rm $(call DYLIB,foo)`
even though we really needed to get rid of the whole glob (at least
based on alex's findings on #13753 that removing the symlink does not
suffice).
Therefore rather than peppering the code with the awkward
`rm $(TMPDIR)/$(call DYLIB_GLOB,foo)`, I instead introduced a common
`REMOVE_DYLIBS` user function that expands into that when called.
After adding an analogous `REMOVE_RLIBS`, I changed all of the
existing calls that rm dylibs or rlibs to use these routines
instead.
Note that the latter is not a true refactoring since I may have
changed cases where it was our intent to only remove the sym-link.
(But if that is the case, then we need to more deeply investigate
alex's findings on #13753 where the system was still dynamically
loading up the non-symlinked libraries that it finds on the load
path.)
* Added RPATH_LINK_SEARCH command and use it on Linux.
On some platforms, namely Linux, when you have libboot.so that has
its internal rpath set (to e.g. $(ORIGIN)/path/to/HOSTDIR), the
linker still complains when you do the link step and it does not
know where to find libraries that libboot.so depends upon that live
in HOSTDIR (think e.g. librustuv.so).
As far as I can tell, the GNU linker will consult the
LD_LIBRARY_PATH as part of the linking process to find such
libraries. But if you want to be more careful and not override
LD_LIBRARY_PATH for the `gcc` invocation, then you need some other
way to tell the linker where it can find the libraries that
libboot.so needs. The solution to this on Linux is the
`-Wl,-rpath-link` command line option.
However, this command line option does not exist on Mac OS X, (which
appears to be figuring out how to resolve the libboot.dylib
dependency by some other means, perhaps by consulting the rpath
setting within libboot.dylib).
So, in order to abstract over this distinction, I added the
RPATH_LINK_SEARCH macro to the run-make infrastructure and added
calls to it where necessary to get Linux working. On architectures
other than Linux, the macro expands to nothing.
* Disable miscellaneous tests atop stage1.
* An especially interesting instance of the previous bullet point:
Excuse regex from doing rustdoc tests atop stage1.
This was a (nearly-) final step to get `make check-stage1` working
again.
The use of a special-case check for regex here is ugly but is
analogous other similar checks for regex such as the one that landed
in PR #13844.
The way this is written, the user will get a reminder that
doc-crate-regex is being skipped whenever their rules attempt to do
the crate documentation tests. This is deliberate: I want people
running `make check-stage1` to be reminded about which cases are
being skipped. (But if such echo noise is considered offensive, it
can obviously be removed.)
* Got windows working with the above changes.
This portion of the commit is a cleanup revision of the (previously
mentioned on try builds) re-architecting of how the LD_LIBRARY_PATH
setup and extension is handled in order to accommodate Windows' (1.)
use of `$PATH` for that purpose and (2.) use of spaces in `$PATH`
entries (problematic for make and for interoperation with tools at
the shell).
* In addition, since the code has been rearchitected to pass the
HOST_RPATH_DIR/TARGET_RPATH_DIR rather than a whole sh
environment-variable setting command, there is no need for the
convert_path_spec calls in maketest.py, which in fact were put in
place to placate Windows but were now causing the Windows builds to
fail. Instead we just convert the paths to absolute paths just like
all of the other path arguments.
Also, a note for makefile hackers: apparently you cannot quote operands
to `ifeq` in a Makefile (or at least, you need to be careful about
adding quotes to only one side).
This commit is part of the libstd facade RFC, issue #13851. This creates a new
library, liballoc, which is intended to be the core allocation library for all
of Rust. It is pinned on the basic assumption that an allocation failure is an
abort or failure.
This module has inherited the heap/libc_heap modules from std::rt, the owned/rc
modules from std, and the arc module from libsync. These three pointers are
currently the three most core pointer implementations in Rust.
The UnsafeArc type in std::sync should be considered deprecated and replaced by
Arc<Unsafe<T>>. This commit does not currently migrate to this type, but future
commits will continue this refactoring.
Use sync::one::Once to fetch the mach_timebase_info only once when
running precise_time_ns(). This helps because mach_timebase_info() is
surprisingly inefficient. Also fix the order of operations when applying
the timebase to the mach absolute time value.
This improves the time on my machine from
```
test tests::bench_precise_time_ns ... bench: 157 ns/iter (+/- 4)
```
to
```
test tests::bench_precise_time_ns ... bench: 38 ns/iter (+/- 3)
```
and it will get even faster once #14174 lands.
By default, jemalloc builds itself with -g3 if the local compiler supports it.
This generates a good deal of debug info (on the order of 18MB) that Windows
gcc/ld does not optimize away, causing hello world to be 18MB in size.
There's no current real need for debugging jemalloc to a great extent, so this
commit manually passes -g1 to override the -g3 that jemalloc uses. This is
confirmed to drop the size of executables on Windows back to a more reasonable
size (2.0MB, as they were before).
Closes #14144
Passing `--pretty flowgraph=<NODEID>` makes rustc print a control flow graph.
In practice, you will also need to pass the additional option
`-o <FILE>` to emit output to a `.dot` file for Graphviz.
(You can only print the flow-graph for a particular block in the AST.)
----
An interesting implementation detail is the way the code puts both the
node index (`cfg::CFGIndex`) and a reference to the payload
(`cfg::CFGNode`) into the single `Node` type that is used for
labelling and walking the graph. I had once mistakenly thought that I
only wanted the `cfg::CFGNode`, but for labelling, you really want the
cfg index too, rather than e.g. trying to use the `ast::NodeId` as the
label (which breaks down e.g. due to `ast::DUMMY_NODE_ID`).
----
As a drive-by fix, I had to fix `rustc::middle::cfg::construct`
interface to reflect changes that have happened on the master branch
while I was getting this integrated into the compiler. (The next
commit actually adds tests of the `--pretty flowgraph` functionality,
so that should ensure that the `rustc::middle::cfg` code does not go
stale again.)
The core library in theory has 0 dependencies, but in practice it has some in
order for it to be efficient. These dependencies are in the form of the basic
memory operations provided by libc traditionally, such as memset, memcmp, etc.
These functions are trivial to implement and themselves have 0 dependencies.
This commit adds a new crate, librlibc, which will serve the purpose of
providing these dependencies. The crate is never linked to by default, but is
available to be linked to by downstream consumers. Normally these functions are
provided by the system libc, but in other freestanding contexts a libc may not
be available. In these cases, librlibc will suffice for enabling execution with
libcore.
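To give a sense of scale, a routine of this kind is only a few lines of freestanding Rust. The sketch below is illustrative, not the crate's actual source:

```
#[no_mangle]
pub unsafe extern "C" fn memset(s: *mut u8, c: i32, n: uint) -> *mut u8 {
    // Naive byte-by-byte fill: no libc, no allocation, no other dependencies.
    let mut i = 0u;
    while i < n {
        *s.offset(i as int) = c as u8;
        i += 1;
    }
    s
}
```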
cc #10116
The current suite of benchmarks for the standard distribution take a significant
amount of time to run, but it's unclear whether we're gaining any benefit from
running them. Some specific pain points:
* No one is looking at the data generated by the benchmarks. We have no graphs
or analysis of what's happening, so all the data is largely being cast into
the void.
* No benchmark has ever uncovered a bug, they have always run successfully.
* Benchmarks not only take a significant amount of time to run, but also take a
significant amount of time to compile. I don't think we should mitigate this
for now because it's useful to ensure that they do indeed still compile.
This commit disables --bench test runs by default as part of `make check`,
flipping the NO_BENCH environment variable to a PLEASE_BENCH variable which will
manually enable benchmarking. If and when a dedicated bot is set up for
benchmarking, this flag can be enabled on that bot.
There's no need to include this specific flag just for android. We can
already deal with what it tries to solve by using -C linker=/path/to/cc
and -C ar=/path/to/ar. The Makefiles for rustc already set this up when
we're crosscompiling.
I did add the flag to compiletest though so it can find gdb. Though, I'm
pretty sure we don't run debuginfo tests on android anyways right now.
[breaking-change]
This adds a `std::rt::heap` module with a nice allocator API. It's a
step towards fixing #13094 and is a starting point for working on a
generic allocator trait.
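A hedged sketch of the kind of raw-allocation calls the module exposes (the exact function names and signatures here are assumptions, not a quotation of the new API):

```
use std::rt::heap::{allocate, deallocate};

fn main() {
    unsafe {
        let size = 64u;
        let align = 8u;
        // The caller owns the returned bytes and must free them with the
        // same size and alignment.
        let ptr = allocate(size, align);
        *ptr = 7;
        assert_eq!(*ptr, 7);
        deallocate(ptr, size, align);
    }
}
```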
The revision used for the jemalloc submodule is the stable 3.6.0 release.
Closes #11807
This code does not belong in libstd, and rather belongs in a dedicated crate. In
the future, the syntax::ext::format module should move to the fmt_macros crate
(hence the name of the crate), but for now the fmt_macros crate will only
contain the format string parser.
The entire fmt_macros crate is marked #[experimental] because it is not meant
for general consumption, only the format!() interface is officially supported,
not the internals.
This is a breaking change for anyone using the internals of std::fmt::parse.
Some of the flags have moved to std::fmt::rt, while the actual parsing support
has all moved to the fmt_macros library.
[breaking-change]
This pull request contains preparations for adding LLDB autotests:
+ the debuginfo tests are split into debuginfo-gdb and debuginfo-lldb
+ the `compiletest` tool is updated to support the debuginfo-lldb mode
+ tests.mk is modified to provide debuginfo-gdb and debuginfo-lldb make targets
+ GDB test cases are moved from `src/test/debug-info` to `src/test/debuginfo-gdb`
+ configure will now look for LLDB and set the appropriate CFG variables
+ the `lldb_batchmode.py` script is added to `src/etc`. It emulates GDB's batch mode
The LLDB autotests themselves are not part of this PR. Those will probably require some manual work on the test bots to make them work for the first time. Better to get these unproblematic preliminaries out of the way in a separate step.
This is the second step in implementing #13851. This PR cannot currently land until a snapshot exists with #13892, but I imagine that this review will take longer.
This PR refactors a large amount of functionality outside of the standard library into a new library, libcore. This new library has 0 dependencies (in theory). In practice, this library currently depends on these symbols being available:
* `rust_begin_unwind` and `rust_fail_bounds_check` - These are the two entry points of failure in libcore. The symbols are provided by libstd currently. In the future (see the bullets on #13851) this will be officially supported with nice error messages. Additionally, there will only be one failure entry point once `std::fmt` migrates to libcore.
* `memcpy` - This is often generated by LLVM. This is also quite trivial to implement for any platform, so I'm not too worried about this.
* `memcmp` - This is required for comparing strings. This function is quite common *everywhere*, so I don't feel to bad about relying on a consumer of libcore to define it.
* `malloc` and `free` - This is quite unfortunate, and is a temporary stopgap until we deal with the `~` situation. More details can be found in the module `core::should_not_exist`
* `fmod` and `fmodf` - These exist because the `Rem` trait is defined in libcore, so the `Rem` implementation for floats must also be defined in libcore. I imagine that any platform using floating-point modulus will have these symbols anyway, and otherwise they will be optimized out.
* `fdim` and `fdimf` - Like `fmod`, these are from the `Signed` trait being defined in libcore. I don't expect this to be much of a problem
These dependencies all "Just Work" for now because libcore only exists as an rlib, not as a dylib.
The commits themselves are organized to show that the overall diff of this extraction is not all that large. Most modules were able to be moved with very few modifications. The primary module left out of this iteration is `std::fmt`. I plan on migrating the `fmt` module to libcore, but I chose to not do so at this time because it had implications on the `Writer` trait that I wanted to deal with in isolation. There are a few breaking changes in these commits, but they are fairly minor, and are all labeled with `[breaking-change]`.
The nastiest parts of this movement come up with `~[T]` and `~str` being language-defined types today. I believe that much of this nastiness will get better over time as we migrate towards `Vec<T>` and `Str` (or whatever the types will be named). There will likely always be some extension traits, but the situation won't be as bad as it is today.
Known deficiencies:
* rustdoc will get worse in terms of readability. This is the next issue I will tackle as part of #13851. If others think that the rustdoc change should happen first, I can also table this to fix rustdoc first.
* The compiler reveals that all these types are reexports via error messages like `core::option::Option`. This is filed as #13065, and I believe that issue would have a higher priority now. I do not currently plan on fixing that as part of #13851. If others believe that this issue should be fixed, I can also place it on the roadmap for #13851.
I recommend viewing these changes on a commit-by-commit basis. The overall change is likely too overwhelming to take in.
Compile-fail tests for syntax extensions belong in this suite which has correct
dependencies on all artifacts rather than just the target artifacts.
Closes #13818
The primary fix brought on by this upgrade is the proper matching of the ```
and ~~~ doc blocks. This also moves hoedown to a git submodule rather than a
bundled repository.
Additionally, hoedown is stricter about code blocks, so this ended up fixing a
lot of invalid code blocks (ending with " ```" instead of "```", or ending with
"~~~~" instead of "~~~").
Closes #12776
These fonts were moved into place by rust's makefiles, but rustdoc is widely
used outside of rustc itself. This moves the fonts into the rustdoc binary,
similarly to the other static assets, and writes them to the output location
whenever rustdoc generates documentation.
Closes #13593
Closes #13787
There is currently not much precedent for target crates requiring syntax
extensions to compile their test versions. This dependency is possible, but
can't be encoded through the normal means of DEPS_regex because it is a
test-only dependency and it must be a *host* dependency (it's a syntax
extension).
Closes #13844
This allows the use of syntax extensions when cross-compiling (fixing #12102). It does this by encoding the target triple in the crate metadata and checking it when searching for files. Currently the crate triple must match the host triple when there is a macro_registrar_fn, it must match the target triple when linking, and can match either when only macro_rules! macros are used.
Due to carelessness, this is pretty much a duplicate of https://github.com/mozilla/rust/pull/13450.
This adds the target triple to the crate metadata.
When searching for a crate the phase (link, syntax) is taken into account.
During link phase only crates matching the target triple are considered.
During syntax phase, either the target or host triple will be accepted, unless
the crate defines a macro_registrar, in which case only the host triple will
match.
Instead of passing through CC which may have things like ccache and other
arguments (when using clang) this commit filters out the necessary arguments
from CC to pass the right linker to rustc.
Closes #13562
This is a bit of an interesting upgrade to LLVM. Upstream LLVM has started using C++11 features, so they require a C++11 compiler to build. I've updated all the bots to have a C++11 compiler, and they appear to be building LLVM successfully:
* Linux bots - I added gcc/g++ 4.7 (good enough)
* Android bots - same as the linux ones
* Mac bots - I installed the most recent command line tools for Lion which gives us clang 3.2, but LLVM wouldn't build unless it was explicitly asked to link to `libc++` instead of `libstdc++`. This involved tweaking `mklldeps.py` and the `configure` script to get things to work out
* Windows bots - mingw-w64 has gcc 4.8.1 which is sufficient for building LLVM (hurray!)
* BSD bots - I updated FreeBSD to 10.0 which brought with it a relevant version of clang.
The largest fallout I've seen so far is that the test suite doesn't work at all on FreeBSD 10. We've already stopped gating on FreeBSD due to #13427 (we used to be on freebsd 9), so I don't think this puts us in too bad of a situation. I will continue to attempt to fix FreeBSD and the breakage on there.
The LLVM update brings with it all of the recently upstreamed LLVM patches. We only have one local patch now which is just an optimization, and isn't required to use upstream LLVM. I want to maintain compatibility with LLVM 3.3 and 3.4 while we can, and this upgrade is keeping us up to date with the 3.5 release. Once 3.5 is release we will in theory no longer require a bundled LLVM.
The goal of the snapshot bots is to produce binaries which can run in as many
locations as possible. Currently we build on Centos 6 for this reason, but with
LLVM's update to C++11, this reduces the number of platforms that we could
possibly run on.
This adds a --enable-llvm-static-stdcpp option to the ./configure script for
Rust which will enable building a librustc with a static dependence on
libstdc++. This normally isn't necessary, but this option can be used on the
snapshot builders in order to continue to make binaries which should be able to
run in as many locations as possible.
In the past, windows was installed from stage3 to guarantee convergence between
the host and target artifacts, but syntax extensions on all platforms are
currently relying on convergence, so special casing this one platform has become
less relevant over time.
This will also have the added benefit of dealing with #13474 and #13491. These
issues will be closed after the next nightly is confirmed to fix them.
This is intended to be the first thing somebody new to the language reads about Rust. It is supposed to be simple and intriguing, to give the user an idea of whether Rust is appropriate for them, and to hint that there's a lot of cool stuff to learn if they just keep diving deeper.
I'm particularly happy with the sequence of concurrency examples.
After removing absolute rpaths, cross compile builds (notably the nightly
builders) broke. This is because the RPATH was pointing at an empty directory
because only the rustc binary is copied over, not all of the target libraries.
This modifies the cross compile logic to fixup the rpath of the stage0
cross-compiled rustc to point to where it came from.
Concerns have been raised about using absolute rpaths in #11746, and this is the
first step towards not relying on rpaths at all. The only current use case for
an absolute rpath is when a non-installed rust builds an executable that then
moves from is built location. The relative rpath back to libstd and absolute
rpath to the installation directory still remain (CFG_PREFIX).
Closes #11746
Rebasing of #12754
First, documented the existing `CTEST_DISABLE_$(TEST_GROUP)` pattern
for conditionally disabling tests based on missing host features.
Added variant of above, `CTEST_DISABLE_NONSELFHOST_$(TEST_GROUP)`,
which is only queried in contexts where the target is not on the
CFG_HOST list (which I interpret as the list of targets that our host
can compatibly emulate; e.g. i686 and x86_64 can in theory run each
other's tests).
Drive-by fix: Remove a redundant copy of the
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec dependency declaration.
These are not installed anywhere, but are included under `./doc` for
those who want an offline copy with their nightlies. This increases the
size of the (compressed) tarball from 76 to 83 MB.
These syntax extensions need a place to be documented, and this starts passing a
`--cfg dox` parameter to `rustdoc` when building and testing documentation in
order to document macros so that they have no effect on the compiled crate, but
only documentation.
Closes #5605
1. Fix a long-standing typo in the makefile: the relevant
CTEST_NAME here is `rpass-full` (with a dash), not
`rpass_full`.
2. The rpass-full tests depend on the complete set of target
libraries. Therefore, the rpass-full tests need to use
the dependencies held in the CSREQ-prefixed variable, not
the TLIBRUSTC_DEFAULT-prefixed variable.
Mac can't actually build our source tarballs because its `tar`
command doesn't support the --exclude-vcs flag. This is just
a workaround to make our mac nightlies work (we get our source
tarballs from the linux bot).
This performs a few touch-ups to the OSX installer:
* A rust logo is shown during installation
* The installation happens to /usr/local by default (instead of /)
* A new welcome screen is shown that's slightly more relevant
This fixes some problems with `make verify-grammar`; llnextgen still reports
a lot of errors.
FYI: My build directory /my-test/build is different from the source directory
/my-test/rust.
    cd /my-test/build
    /my-test/rust/configure --prefix=/my-test/bin
    make
    make install
    make verify-grammar
When calling `make verify-grammar`, rust.md cannot be found because the
path to rust.md is missing. The path is set to:
    $(D)/rust.md
This can only be tested when llnextgen is installed.
Signed-off-by: Jan Kobler <eng1@koblersystems.de>
The previous dependency calculation was based on an arbitrary set of asterisks
at an arbitrary depth, but using the recursive version should be much more
robust in figuring out what's dependent.
Closes #13118
This doesn't work quite right yet (we need to build packages for all hosts)
and I'm not ready to turn on new dist artifacts yet, but I want to start doing
dry runs for 0.10, so I'm turning this off for now.
After `make clean` I'm seeing the build break with
```
cp: cannot stat ‘x86_64-unknown-linux-gnu/rt/libbacktrace/.libs/libbacktrace.a’: No such file or directory
```
Deleting the libbacktrace dir entirely on clean fixes it.
This commit switches over the backtrace infrastructure from piggy-backing off
the RUST_LOG environment variable to using the RUST_BACKTRACE environment
variable (logging is now disabled in libstd).
This commit moves all logging out of the standard library into an external
crate. This crate is the new crate which is responsible for all logging macros
and logging implementation. A few reasons for this change are:
* The crate map has always been a bit of a code smell among rust programs. It
has difficulty being loaded on almost all platforms, and it's used almost
exclusively for logging and only logging. Removing the crate map is one of the
end goals of this movement.
* The compiler has a fair bit of special support for logging. It has the
__log_level() expression as well as generating a global word per module
specifying the log level. This is unfairly favoring the built-in logging
system, and is much better done purely in libraries instead of the compiler
itself.
* Initialization of logging is much easier to do if there is no reliance on a
magical crate map being available to set module log levels.
* If the logging library can be written outside of the standard library, there's
no reason that it shouldn't be. It's likely that we're not going to build the
highest quality logging library of all time, so third-party libraries should
be able to provide just as high-quality logging systems as the default one
provided in the rust distribution.
With a migration such as this, the change does not come for free. There are some
subtle changes in the behavior of liblog vs the previous logging macros:
* The core change of this migration is that there is no longer a physical
log-level per module. This concept is still emulated (it is quite useful), but
there is now only a global log level, not a local one. This global log level
is a reflection of the maximum of all log levels specified. The previously
generated logging code looked like:
    if specified_level <= __module_log_level() {
        println!(...)
    }

The newly generated code looks like:

    if specified_level <= ::log::LOG_LEVEL {
        if ::log::module_enabled(module_path!()) {
            println!(...)
        }
    }
Notably, the first layer of checking is still intended to be "super fast" in
that it's just a load of a global word and a compare. The second layer of
checking is executed to determine if the current module does indeed have
logging turned on.
This means that if any module has a debug log level turned on, all modules
with debug log levels get a little bit slower (they all do more expensive
dynamic checks to determine if they're turned on or not).
Semantically, this migration brings no change in this respect, but
runtime-wise, this will have a perf impact on some code.
* A `RUST_LOG=::help` directive will no longer print out a list of all modules
that can be logged. This is because the crate map will no longer specify the
log levels of all modules, so the list of modules is not known. Additionally,
warnings can no longer be provided if a malformed logging directive was
supplied.
The new "hello world" for logging looks like:
    #[phase(syntax, link)]
    extern crate log;

    fn main() {
        debug!("Hello, world!");
    }
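Because the `println!` sits inside the generated checks shown earlier, macro
arguments are only evaluated when logging is actually enabled for the module.
A hedged sketch of why that matters (the `expensive_checksum` helper is
hypothetical):

```
#[phase(syntax, link)]
extern crate log;

// Hypothetical stand-in for an expensive computation.
fn expensive_checksum() -> uint {
    let mut sum = 0u;
    for i in range(0u, 1000000) {
        sum += i;
    }
    sum
}

fn main() {
    // The argument is only computed if both the global level check and the
    // per-module check pass, so this line costs almost nothing when debug
    // logging is disabled.
    debug!("checksum: {}", expensive_checksum());
}
```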
This commit shreds all remnants of libextra from the compiler and standard
distribution. Two modules, c_vec/tempfile, were moved into libstd after some
cleanup, and the other modules were moved to separate crates as seen fit.
Closes #8784. Closes #12413. Closes #12576.
This aims to cover the basics of writing safe unsafe code. At the moment
it is just designed to be a better place for the `asm!()` docs than the
detailed release notes wiki page, and I took the time to write up some
other things.
More examples are needed, especially of things that can subtly go wrong;
and vast areas of `unsafe`-ty aren't covered, e.g. `static mut`s and
thread-safety in general.
This enables the lowering of llvm 64b intrinsics to hardware ops, resolving issues around `__kernel_cmpxchg64` on older kernels on ARM devices, and also enables use of the hardware floating point unit, resolving https://github.com/mozilla/rust/issues/10482.
Whenever a failure happens, if a program is run with
`RUST_LOG=std::rt::backtrace` a backtrace will be printed to the task's stderr
handle. Stack traces are unconditionally printed on double-failure and
rtabort!().
This ended up having a nontrivial implementation, and here's some highlights of
it:
* We're bundling libbacktrace for everything but OSX and Windows
* We use libgcc_s and its libunwind APIs to get a backtrace of instruction
pointers
* On OSX we use dladdr() to go from an instruction pointer to a symbol
* On unix that isn't OSX, we use libbacktrace to get symbols
* Windows, as usual, has an entirely separate implementation
Lots more fun details and comments can be found in the source itself.
Closes#10128
Closes#12803 (std: Relax an assertion in oneshot selection) r=brson
Closes#12818 (green: Fix a scheduler assertion on yielding) r=brson
Closes#12819 (doc: discuss try! in std::io) r=alexcrichton
Closes#12820 (Use generic impls for `Hash`) r=alexcrichton
Closes#12826 (Remove remaining nolink usages) r=alexcrichton
Closes#12835 (Emacs: always jump the cursor if needed on indent) r=brson
Closes#12838 (Json method cleanup) r=alexcrichton
Closes#12843 (rustdoc: whitelist the headers that get a § on hover) r=alexcrichton
Closes#12844 (docs: add two unlisted libraries to the index page) r=pnkfelix
Closes#12846 (Added a test that checks that unary structs can be mutably borrowed) r=sfackler
Closes#12847 (mk: Fix warnings about duplicated rules) r=nmatsakis
This functionality is not super-core and so doesn't need to be included
in std. It's possible that std may need rand (it does a little bit now,
for io::test) in which case the functionality required could be moved to
a secret hidden module and reexposed by librand.
Unfortunately, using #[deprecated] here is hard: there's too much to
mock to make it feasible, since we have to ensure that programs still
typecheck to reach the linting phase.
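To keep using random number generation after the move, a program pulls in the
new crate. A hedged sketch, assuming the split-out crate is loaded as `rand`
and keeps the familiar top-level `random` helper:

```
extern crate rand;

fn main() {
    // Previously reachable through libstd; now requires the rand crate
    // (assuming the `random` helper is exported at the crate root).
    let n: uint = rand::random();
    println!("rolled {}", n);
}
```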
- remove `node.js` dep., it has no effect as of #12747 (1)
- switch between LaTeX compilers, some cleanups
- CSS: fixup the print stylesheet, refactor highlighting code (2)
(1): `prep.js` outputs its own HTML directives, which `pandoc` cannot recognize when converting the document into LaTeX (this is why the PDF docs have never been highlighted as of now).
Note that if we were to add the `.rust` class to snippets, we could probably use pandoc's native highlighting capabilities, i.e. Kate ([here is](http://adrientetar.github.io/rust-tuts/tutorial/tutorial.pdf) an example of that).
(2): the only real highlighting change is for lifetimes, which are now brown instead of red; the rest is just a refactor of two shades of red that look the same.
Also, I made the number highlighting for src in rustdoc a touch lighter so that it is less distracting.
@alexcrichton, @huonw
Closes#9873. Closes#12788.
Work towards #9876.
Several minor things here:
* Fix the `need_ok` function in `configure`
* Install man pages with non-executable permissions
* Use the correct directory for man pages when installing (this was a recent regression)
* Put all distributables in a new `dist/` directory in the build directory (there are soon to be significantly more of these)
Finally, this also creates a new, more precise way to install and uninstall Rust's files, the `install.sh` script, and creates a build target (currently `dist-tar-bins`) that creates a binary tarball containing all the installable files, boilerplate and license docs, and `install.sh`.
This binary tarball is the lowest-common denominator way to install Rust on Unix. We'll use it as the default installer on Linux (OS X will use .pkg).
## How `install.sh` works
* First, the makefiles (`prepare.mk` and `dist.mk`) put all the stuff that needs to be installed in a new directory in `dist/`.
* Then it puts `install.sh` in that same directory and a list of all the files to install at `rustlib/manifest`.
* Then the directory can be packaged and distributed.
* When `install.sh` runs it does some sanity checking then copies everything in the manifest to the install prefix, then copies the manifest as well.
* When `install.sh` runs again in the future it first looks for the existing manifest at the install prefix, and if it exists deletes everything in it. This is how the core distribution is upgraded - cargo is responsible for the rest.
* `install.sh --uninstall` will uninstall Rust
## Future work:
* Modify `install.sh` to accept `--man-dir` etc
* Rewrite `install.mk` to delegate to `install.sh`
* Investigate how `install.sh` does or doesn't work with .pkg on Mac
* Modify `dist.mk` to create `.pkg` files for all hosts
* Possibly use [makeself](http://www.megastep.org/makeself/) to create self-extracting installers
* Modify dist-snap bots to run on mac as well, uploading binary tarballs and .pkg files for the four combos of linux, mac, x86, and x86_64.
* Adjust build system to be able to augment versions with '-nightly'
* Adjust build system to name dist artifacts without version numbers e.g. `rust-nightly-...pkg`. This is so we don't leave a huge trail of old nightly binaries on S3 - they just get overwritten.
* Create new dist-nightly builder
* Give the build master a new cron job to push to dist-nightly every night
* Add docs to distributables
* Update README.md to reflect the new reality
* Modernize the website to promote new installers
`prep.js` outputs its own HTML directives, which `pandoc` cannot
recognize when converting the document into LaTeX (this is why the
PDF docs have never been highlighted as of now).
Note that if we were to add the `.rust` class to snippets, we could
probably use pandoc's native highlighting capabilities, i.e. Kate.
This restores the old behaviour (as compared to building PDF versions of
all standalone docs), because some of the guides use unicode characters,
which seems to make pdftex unhappy.
This works around some sundown parsing limitations.
Sundown parses

    ```
    ~~~

as a valid codeblock (i.e. mismatching delimiters), which made using
rustdoc on its own documentation impossible (since it used nested
codeblocks to demonstrate how testable codesnippets worked).
This modifies those snippets so that they're delimited by indentation,
but this then means they're tested by `rustdoc --test` & rendered as
Rust code (because there's no way to add `notrust` to
indentation-delimited code blocks). A comment is added to stop the
compiler reading the text too closely, but this unfortunately has to be
visible in the final docs, since that's the text on which the
highlighting happens.
E.g. this stops check-...-doc rules for `rustdoc.md` and `librustdoc`
from stamping on each other, so that they are correctly built and
tested. (Previously only the rustdoc crate was tested.)
This converts it to be very similar to crates.mk, with a single list of
the documentation items creating all the necessary bits and pieces.
Changes include:
- rustdoc is used to render HTML & test standalone docs
- documentation building now obeys NO_REBUILD=1
- testing standalone docs now obeys NO_REBUILD=1
- L10N is slightly less broken (in particular, it shares dependencies
and code with the rest of the code)
- PDFs can be built for all documentation items, not just tutorial and
manual
- removes the obsolete & unused extract-tests.py script
- adjust the CSS for standalone docs to use the rustdoc syntax
highlighting
This new SVH is used to uniquely identify all crates as a snapshot in time of
their ABI/API/publicly reachable state. This current calculation is just a hash
of the entire crate's AST. This is obviously incorrect, but it is currently the
reality for today.
This change threads through the new Svh structure which originates from crate
dependencies. The concept of crate id hash is preserved to provide efficient
matching on filenames for crate loading. The hash inspected once crate metadata
is opened has been changed to use the new Svh.
The goal of this hash is to identify when upstream crates have changed but
downstream crates have not been recompiled. This will prevent the def-id drift
problem where upstream crates were recompiled, thereby changing their metadata,
but downstream crates were not recompiled.
In the future this hash can be expanded to exclude contents of the AST like doc
comments, but limitations in the compiler prevent this change from being made at
this time.
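Conceptually the SVH is just a fingerprint of the crate's contents: an
unchanged upstream crate hashes to the same value, while any change produces a
new one. The toy FNV-1a sketch below is purely illustrative and is not the
compiler's actual Svh type or algorithm:

```
// Toy fingerprint only; the real Svh hashes the crate's entire AST.
fn toy_svh(public_surface: &str) -> u64 {
    let mut hash = 0xcbf29ce484222325u64;            // FNV-1a offset basis
    for byte in public_surface.bytes() {
        hash = (hash ^ byte as u64) * 0x100000001b3; // FNV-1a prime
    }
    hash
}

fn main() {
    println!("{}", toy_svh("pub fn foo(x: int) -> int"));
}
```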
Closes#10207
The compiler itself doesn't necessarily need any features of green threading
such as spawning tasks and lots of I/O, so libnative is slightly more
appropriate for rustc to use itself.
This should also help the rusti bot which is currently incompatible with libuv.
tidy has some limitations (e.g. the "checked in binaries" check doesn't
and can't actually check git), and so it's useful to run tests without
running tidy occasionally.
This trades an O(n) allocation + memcpy for an O(1) proc allocation (for
the destructor). Most users only need &[u8] anyway (all of the users in
the main repo), and so this offers large gains.
These two containers are indeed collections, so their place is in
libcollections, not in libstd. There will always be a hash map as part of the
standard distribution of Rust, but by moving it out of the standard library it
makes libstd that much more portable to more platforms and environments.
This conveniently also removes the stuttering of 'std::hashmap::HashMap',
although 'collections::HashMap' is only one character shorter.
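A hedged sketch of the new import, using the crate-root path mentioned above:

```
extern crate collections;

use collections::HashMap;

fn main() {
    let mut counts = HashMap::new();
    counts.insert("hashmap", 1u);
    println!("{} entries", counts.len());
}
```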
Two optimizations:
1. Compress `foo.bc` in each rlib with `flate`. These are just taking up space and are only used with LTO, no need for LTO to be speedy.
2. Stop installing `librustc.rlib` and friends; this is a *huge* source of bloat. There's no need for us to install static libraries for these components.
cc #12440
You rarely want to statically link against librustc and friends, so there's no
real reason to install the rlib version of these libraries, especially because
the rlibs are massive.
LLVM's tools are not contained in the local directory if --llvm-root is used by
the ./configure script. This fixes the installation path to be the root provided
by --llvm-root.
The new methodology can be found in the re-worded comment, but the gist of it is
that -C prefer-dynamic doesn't turn off static linkage. The error messages
should also be a little more sane now.
Closes#12133
Work toward #9876.
This adds `prepare.mk`, which is simply a more heavily-parameterized `install.mk`, then uses `prepare` to implement both `install` and the windows installer (`dist`). Smoke tested on both Linux and Windows.
Because the build system treats Makefile.in and the .mk files slightly
differently (.in is copied, .mk are included), this makes the system
more uniform. Fewer build system changes will require a complete
reconfigure.
Currently when you run `make -jN` it's likely that you'll remove compiler-rt and
then it won't get cp'd back into the right place. I believe the reason for this
is that the compiler-rt library target never got updated so make decided it
never needed to copy the files back into place. The files were all there at the
beginning of `make`, but then we may clean out the stage0 versions if we unzip
the snapshot again.
Includes an upstream commit by pcwalton to improve codegen of our enums getting
moved around.
This also introduces a new commit on top of our stack of patches to fix a mingw32 build issue. I have submitted the patch upstream: http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140210/204653.html
I verified that this builds on the try bots, which amazes me because I think that c++11 is turned on now, but I guess we're still lucky!
Closes#10613 (pcwalton's patch landed)
Closes#11992 (llvm has removed these options)
Two unfortunate allocations were wrapping a proc() in a proc() with
GreenTask::build_start_wrapper, and then boxing this proc in a ~proc() inside of
Context::new(). Both of these allocations were a direct result from two
conditions:
1. The Context::new() function has a nice api of taking a procedure argument to
start up a new context with. This inherently required an allocation by
build_start_wrapper because extra code needed to be run around the edges of a
user-provided proc() for a new task.
2. The initial bootstrap code only understood how to pass one argument to the
next function. By modifying the assembly and entry points to understand more
than one argument, more information is passed through in registers instead of
allocating a pointer-sized context.
This is sadly where I end up throwing mips under a bus because I have no idea
what's going on in the mips context switching code and don't know how to modify
it.
Closes#7767
cc #11389
Previously crates like `green` and `native` would still depend on their
parents when running `make check-stage2-green NO_REBUILD=1`, this
ensures that they only depend on their source files.
Also, apply NO_REBUILD to the crate doc tests, so, for example,
`check-stage2-doc-std` will use an already compiled `rustdoc` directly.
These are ancient. I removed a bunch of questions that are less relevant, or completely irrelevant, updated other entries, and removed things that are already better expressed elsewhere.
libextra is currently being split into several crates. This commit adds
them all to the dist target in order to have them in the final tarballs.
Signed-off-by: Luca Bruno <lucab@debian.org>
src/README.txt has been renamed in a30d61b05a, so make dist is failing because
it cannot find it.
This commit makes the dist target work again.
Signed-off-by: Luca Bruno <lucab@debian.org>
Part of #8784
Changes:
- Everything labeled under collections in libextra has been moved into a new crate 'libcollections'.
- Renamed container.rs to deque.rs, since it was no longer 'container traits for extra', just a deque trait.
- Crates that depend on the collections have been updated and dependencies sorted.
- I think I changed all the imports in the tests to make sure it works. I'm not entirely sure, as near the end of the tests there was yet another `use` that I forgot to change, and when I went to try again, it started rebuilding everything, which I don't currently have time for.
There will probably be incompatibility between this and the other pull requests that are splitting up libextra. I'm happy to rebase once those have been merged.
The tests I didn't get to run should pass. But I can redo them another time if they don't.
This has been a long time coming. Conditions in rust were initially envisioned
as being a good alternative to error code return pattern. The idea is that all
errors are fatal-by-default, and you can opt-in to handling the error by
registering an error handler.
While sounding nice, conditions ended up having some unforeseen shortcomings:
* Actually handling an error has some very awkward syntax:
    let mut result = None;
    let mut answer = None;
    io::io_error::cond.trap(|e| { result = Some(e) }).inside(|| {
        answer = Some(some_io_operation());
    });
    match result {
        Some(err) => { /* hit an I/O error */ }
        None => {
            let answer = answer.unwrap();
            /* deal with the result of I/O */
        }
    }
This pattern can certainly use functions like io::result, but at its core
actually handling conditions is fairly difficult.
* The "zero value" of a function is often confusing. One of the main ideas
behind using conditions was to change the signature of I/O functions. Instead
of read_be_u32() returning a result, it returned a u32. Errors were notified
via a condition, and if you caught the condition you understood that the "zero
value" returned is actually a garbage value. These zero values are often
difficult to understand, however.
One case of this is the read_bytes() function. The function takes an integer
length (the number of bytes to read) and returns an array of that size. The
array may actually be shorter, however, if an error occurred.
Another case is fs::stat(). The theoretical "zero value" is a blank stat
struct, but it's a little awkward to create and return a zero'd out stat
struct on a call to stat().
In general, the return values of functions that can raise errors are much more
natural when using a Result as opposed to an always-usable zero-value.
* Conditions impose a necessary runtime requirement on *all* I/O. In theory I/O
is as simple as calling read() and write(), but using conditions imposed the
restriction that a rust local task was required if you wanted to catch errors
with I/O. While certainly a surmountable difficulty, this was always a bit of
a thorn in the side of conditions.
* Functions raising conditions are not always clear that they are raising
conditions. This suffers a similar problem to exceptions where you don't
actually know whether a function raises a condition or not. The documentation
likely explains, but if someone retroactively adds a condition to a function
there's nothing forcing upstream users to acknowledge a new point of task
failure.
* Libraries using I/O are not guaranteed to correctly raise conditions when an
error occurs. In developing various I/O libraries, it's much easier to just
return `None` from a read rather than raising an error. The silent contract of
"don't raise on EOF" was a little difficult to understand and threw a wrench
into the answer of the question "when do I raise a condition?"
Many of these difficulties can be overcome through documentation, examples, and
general practice. In the end, all of these difficulties added together ended up
being too overwhelming and improving various aspects didn't end up helping that
much.
A result-based I/O error handling strategy also has shortcomings, but the
cognitive burden is much smaller. The tooling necessary to make this strategy as
usable as conditions were is much smaller than the tooling necessary for
conditions.
Perhaps conditions may manifest themselves as a future entity, but for now
we're going to remove them from the standard library.
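For contrast, a hedged sketch of the Result-based style that replaces the trap
dance above (the file name is hypothetical, and the exact I/O API of the era
may differ in details):

```
use std::io::File;
use std::path::Path;

fn main() {
    // Errors now come back as values and are handled with an ordinary match
    // (or the try! macro inside functions that themselves return a Result).
    match File::open(&Path::new("message.txt")) {
        Ok(mut file) => {
            // read_to_end also returns a Result instead of raising a condition.
            let contents = file.read_to_end();
            println!("read ok? {}", contents.is_ok());
        }
        Err(_) => println!("hit an I/O error"),
    }
}
```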
Closes #9795. Closes #8968.
This commit removes the -c, --emit-llvm, -s, --rlib, --dylib, --staticlib,
--lib, and --bin flags from rustc, adding the following flags:
* --emit=[asm,ir,bc,obj,link]
* --crate-type=[dylib,rlib,staticlib,bin,lib]
The -o option has also been redefined to be used for *all* flavors of outputs.
This means that we no longer ignore it for libraries. The --out-dir remains the
same as before.
The new logic for files that rustc emits is as follows:
1. Output types are dictated by the --emit flag. The default value is
--emit=link, and this option can be passed multiple times and have all options
stacked on one another.
2. Crate types are dictated by the --crate-type flag and the #[crate_type]
attribute. The flags can be passed many times and stack with the crate
attribute.
3. If the -o flag is specified, and only one output type is specified, the
output will be emitted at this location. If more than one output type is
specified, then the filename of -o is ignored, and all output goes in the
directory that -o specifies. The -o option always ignores the --out-dir
option.
4. If the --out-dir flag is specified, all output goes in this directory.
5. If -o and --out-dir are both not present, all output goes in the directory of
the crate file.
6. When multiple output types are specified, the filestem of all output is the
same as the name of the CrateId (derived from a crate attribute or from the
filestem of the crate file).
Closes #7791. Closes #11056. Closes #11667.
This commit removes the -c, --emit-llvm, -s, --rlib, --dylib, --staticlib,
--lib, and --bin flags from rustc, adding the following flags:
* --emit=[asm,ir,bc,obj,link]
* --crate-type=[dylib,rlib,staticlib,bin,lib]
The -o option has also been redefined to be used for *all* flavors of outputs.
This means that we no longer ignore it for libraries. The --out-dir remains the
same as before.
The new logic for files that rustc emits is as follows:
1. Output types are dictated by the --emit flag. The default value is
--emit=link, and this option can be passed multiple times and have all
options stacked on one another.
2. Crate types are dictated by the --crate-type flag and the #[crate_type]
attribute. The flags can be passed many times and stack with the crate
attribute (an example follows after this list).
3. If the -o flag is specified, and only one output type is specified, the
output will be emitted at this location. If more than one output type is
specified, then the filename of -o is ignored, and all output goes in the
directory that -o specifies. The -o option always ignores the --out-dir
option.
4. If the --out-dir flag is specified, all output goes in this directory.
5. If -o and --out-dir are both not present, all output goes in the current
directory of the process.
6. When multiple output types are specified, the filestem of all output is the
same as the name of the CrateId (derived from a crate attribute or from the
filestem of the crate file).
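A hedged illustration of the `#[crate_type]` attribute from point 2 stacking
with the command line (the inner-attribute spelling shown here may differ
slightly from the syntax of the time):

```
// lib.rs (hypothetical file name)
// The crate asks for two output crate types itself; passing, say,
// --crate-type=staticlib on the command line would simply add a third.
#![crate_type = "rlib"]
#![crate_type = "dylib"]

pub fn answer() -> int { 42 }
```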
Closes #7791. Closes #11056. Closes #11667.
- `extra::json` didn't make the cut, because of `extra::json`'s required dep
on `extra::TreeMap`. If/when `extra::TreeMap` moves out of `extra`, then
`extra::json` could move into `serialize`.
- `libextra`, `libsyntax` and `librustc` depend on the newly created
`libserialize`
- The extensions to various `extra` types like `DList`, `RingBuf`, `TreeMap`
and `TreeSet` for `Encodable`/`Decodable` were moved into the respective
modules in `extra`
- There is some trickery, evident in `src/libextra/lib.rs` where a stub
of `extra::serialize` is set up (in `src/libextra/serialize.rs`) for
use in the stage0 build, where the snapshot rustc is still making
deriving for `Encodable` and `Decodable` point at extra (a small deriving
example follows below). Big props to @huonw for help working out the re-export
solution for this.
extra: inline extra::serialize stub
fix stuff clobbered in rebase + don't reexport serialize::serialize
no more globs in libserialize
syntax: fix import of libserialize traits
librustc: fix bad imports in encoder/decoder
add serialize dep to librustdoc
fix failing run-pass tests w/ serialize dep
adjust uuid dep
more rebase de-clobbering for libserialize
fixing tests, pushing libextra dep into cfg(test)
fix doc code in extra::json
adjust index.md links to serialize and uuid library
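A hedged sketch of deriving against the new crate (field types and the struct
itself are illustrative only):

```
extern crate serialize;

// The deriving machinery now expands to impls referencing the serialize crate
// rather than extra.
#[deriving(Encodable, Decodable)]
struct Point {
    x: int,
    y: int,
}

fn main() {
    // Actually encoding requires a concrete Encoder (e.g. the JSON one, which
    // stayed behind in extra for now); constructing one is omitted here.
    let p = Point { x: 1, y: 2 };
    println!("({}, {})", p.x, p.y);
}
```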
Previously, the check-fast and check-lite test suites weren't picking up all
target crates, rather just std/extra. In order to ensure that all of our crates
work on windows, I've modified these rules to build the test suites for all
TARGET_CRATES members. Note that this still excludes rustc/syntax/rustdoc.
In line with the dissolution of libextra - #8784 - this moves arena and glob into
their own respective modules. Updates .gitignore with the entries
doc/{arena,glob} in accordance.
This changes android testing to upload *all* target crates rather than just a
select subset. This should unblock #11867 which is introducing a libglob
dependency in testing.
In line with the dissolution of libextra - #8784 - moves arena to its own library libarena.
Changes based on PR #11787. Updates .gitignore to ignore doc/arena.
It was decided a long, long time ago that libextra should not exist, but rather its modules should be split out into smaller independent libraries maintained outside of the compiler itself. The theory was to use `rustpkg` to manage dependencies in order to move everything out of the compiler, but maintain an ease of usability.
Sadly, the work on `rustpkg` isn't making progress as quickly as expected, but the need for dissolving libextra is becoming more and more pressing. Because of this, we've thought that a good interim solution would be to simply package more libraries with the rust distribution itself. Instead of dissolving libextra into libraries outside of the mozilla/rust repo, we can dissolve libraries into the mozilla/rust repo for now.
Work on this has been excruciatingly painful in the past because the makefiles are completely opaque to all but a few. Adding a new library involved adding about 100 lines spread out across 8 files (incredibly error prone). The first commit of this pull request targets this pain point. It does not rewrite the build system, but rather refactors large portions of it. Afterwards, adding a new library is as simple as modifying 2 lines (easy, right?). The build system automatically keeps track of dependencies between crates (rust *and* native), promotes binaries between stages, tracks dependencies of installed tools, etc, etc.
With this newfound buildsystem power, I chose the `extra::flate` module as the first candidate for removal from libextra. While a small module, this module is relatively complex in that it has a C dependency and the compiler requires it (messing with the dependency graph a bit). Although I modified more than 2 lines of makefiles to accommodate libflate (the native dependency required 2 extra lines of modifications), the removal process was easy and straightforward.
---
Testing-wise, I've cross-compiled, run tests, built some docs, installed, uninstalled, etc. I'm still working out a few kinks, and I'm sure that there's gonna be build system issues after this, but it should be working well for basic use!
cc #8784
This is hopefully the beginning of the long-awaited dissolution of libextra.
Using the newly created build infrastructure for building libraries, I decided
to move the first module out of libextra.
While not being a particularly meaty module in and of itself, the flate module
is required by rustc and additionally has a native C dependency. I was able to
very easily split out the C dependency from rustrt, update librustc, and
magically everything gets installed to the right locations and built
automatically.
This is meant to be a proof-of-concept commit to how easy it is to remove
modules from libextra now. I didn't put any effort into modernizing the
interface of libflate or updating it other than to remove the one glob import it
had.
Before this patch, if you wanted to add a crate to the build system you had to
change about 100 lines across 8 separate makefiles. This is highly error prone
and opaque to all but a few. This refactoring is targeted at consolidating this
effort so that adding a new crate adds one line in one file, in a way that
everyone can understand.
The new macro loading infrastructure needs the ability to force a
procedural-macro crate to be built with the host architecture rather than the
target architecture (because the compiler is just about to dlopen it).
The official documentation sorely needs an explanation of the rust runtime and what it is exactly, and I want this guide to provide that information.
I'm unsure of whether I've been too light on some topics while too heavy on others. I also feel like a few things are still missing. As always, feedback is appreciated, especially about things you'd like to see written about!
If we bootstrap a cross compile from a stage1 compiler, then the stage1 compiler
already knows about the rustc => rustlib change, so we need to not add the extra
flag if it's a stage0 version of a target from a stage1 of another target.
This reorganizes the documentation index to be more focused on the in-tree docs, and to clean up the style, and it also adds @steveklabnik's pointer guide.
All the copying of files amongst one another was apparently causing something to
get corrupted. Instead of having files fly around, just update the directories
to link to.
The makefiles and the windows installer disagree on the name of this file. In practical terms this change only means that the '-pre' installers will be named 'rust-0.9-pre-install.exe' instead 'rust-0.9-install.exe'.
This is not done yet but I'm posting it to get feedback.
The wiki has a ton of different tutorials/manuals/faq and so forth. Instead of migrating all of them right now, I just migrated the following:
* The general main wiki page
* Language FAQ
* Project FAQ
If this feels reasonable, please comment so that I can continue with confidence.
Ensure configure creates doc/guides directory
Fix configure makefile and tests
Remove old guides dir and configure option, convert testing to guide
Remove ignored files
Fix submodule issue
prepend dir in makefile so that bors knows how to build the docs
S to uppercase