This does the bare minimum to make registration of error codes work again. After this patch, every call to `span_err!` with an error code gets that error code validated against a list in that crate, and a new tidy script, `errorck.py`, checks that no error codes are duplicated globally.
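For illustration, a hedged sketch of the kind of call that is now checked (not taken from the patch; the session, span, error code, and message below are placeholders):

    // Inside the compiler, a diagnostic carrying an error code might be emitted
    // like this; the registered code is validated against the crate's list.
    span_err!(sess, span, E0123, "illustrative error message");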
There are further improvements to be made yet, detailed in #19624.
r? @nikomatsakis
The script is intended as a tool for any sort of verification amenable
to rustdoc's HTML output. For example, a link checker would belong in
this script. It already parses HTML into a document tree form (with
a slight caveat), so future tests can make use of it.
As an example, the relevant `rustdoc-*` run-make tests have been updated
to use `htmldocck.py` and have had their `verify.sh` removed. In the future
they may move to a dedicated directory with htmldocck running by default.
The detailed explanation of test scripts is provided as a docstring of
htmldocck.
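For example, a test source file can embed `@has` directives that htmldocck checks against the generated HTML (the paths, XPath, and item below are illustrative, not from an actual test):

    // @has foo/struct.Bar.html '//pre' 'pub struct Bar'
    // @has foo/index.html '//a/@href' 'struct.Bar.html'
    pub struct Bar;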
cc #19723
This pulls all of our long-form documentation into a single document,
nicknamed "the book" and formally titled "The Rust Programming
Language."
A few things motivated this change:
* People knew of The Guide, but not the individual Guides. This merges
them together, helping discoverability.
* You can get all of Rust's long-form documentation in one place, which
is nice.
* We now have rustbook in-tree, which can generate this kind of
documentation. While its style is basic, the general idea is much
better: a table of contents on the left-hand side.
* Rather than an almost 10,000-line guide.md, there are now smaller files
per section.
This removes a large array of deprecated functionality, regardless of how
recently it was deprecated. The purpose of this commit is to clean out the
standard libraries and compiler for the upcoming alpha release.
Some notable compiler changes were enabling warnings for all now-deprecated
command line arguments (previously the deprecated versions were silently
accepted) and removing deriving(Zero) entirely (the trait was removed).
The distribution no longer contains the libtime or libregex_macros crates. Both
of these have been deprecated for some time and are available externally.
Enable parallel codegen (2 units) by default when --opt-level is 0 or 1. This
gives a minor speedup on large crates (~10%), with only a tiny slowdown (~2%)
for small ones (which usually build in under a second regardless). The current
default (no parallelization) is used when the user requests optimization
(--opt-level 2 or 3), and when the user has enabled LTO (which is incompatible
with parallel codegen).
This commit also changes the rust build system to use parallel codegen
when appropriate. This means codegen-units=4 for stage0 always, and
also for stage1 and stage2 when configured with --disable-optimize.
(Other settings use codegen-units=1 for stage1 and stage2, to get
maximum performance for release binaries.) The build system also sets
codegen-units=1 for compiletest tests (compiletest does its own
parallelization) and uses the same setting as stage2 for crate tests.
----
To reproduce the issue on commit ba246100ca,
it does not suffice to add just `check-build-compiletest` to
`check-secondary`; one must also ensure that `check-build-compiletest`
precedes the satisfaction of the `check` rule.
Otherwise hidden dependencies of `compiletest` would end up getting
satisfied when make builds `rustc` at each stage in order to
eventually run `check-stage2`.
So to handle that I moved `check-secondary` before `check` in the
`check-all` rule that bors uses, and for good measure, I also put
`check-build-compiletest` at the front of the `check-secondary` rule's
dependencies.
My understanding is that running `check-secondary` should be
relatively cheap, and thus such a reordering will not hurt bors.
----
Fix #17883.
We'll use this to run a subset of the test suite on a dedicated
bot.
This puts the grammar tests and the pretty-printer tests under
check-secondary. It leaves the pretty tests under plain `check`
for now, until the new bot is added to take over.
Because check-secondary is not run as part of `make check` there
will be a set of tests that most users never run and are only
checked by bors. I think this will be ok because grammar tests
should rarely regress, and the people regressing such tests
should have the fortitude to deal with it.
Add libunicode; move unicode functions from core
- created new crate, libunicode, below libstd
- split `Char` trait into `Char` (libcore) and `UnicodeChar` (libunicode)
- Unicode-aware functions now live in libunicode
- `is_alphabetic`, `is_XID_start`, `is_XID_continue`, `is_lowercase`,
`is_uppercase`, `is_whitespace`, `is_alphanumeric`, `is_control`, `is_digit`,
`to_uppercase`, `to_lowercase`
- added `width` method in UnicodeChar trait
- determines printed width of character in columns, or None if it is a non-NULL control character
- takes a boolean argument indicating whether the present context is CJK or not (characters with 'A'mbiguous widths are double-wide in CJK contexts, single-wide otherwise)
- split `StrSlice` into `StrSlice` (libcore) and `UnicodeStrSlice` (libunicode)
- functionality formerly in `StrSlice` that relied upon Unicode functionality from `Char` is now in `UnicodeStrSlice`
- `words`, `is_whitespace`, `is_alphanumeric`, `trim`, `trim_left`, `trim_right`
- also moved `Words` type alias into libunicode because `words` method is in `UnicodeStrSlice`
- unified Unicode tables from libcollections, libcore, and libregex into libunicode
- updated `unicode.py` in `src/etc` to generate aforementioned tables
- generated new tables based on latest Unicode data
- added `UnicodeChar` and `UnicodeStrSlice` traits to prelude
- libunicode is now the collection point for the `std::char` module, combining the libunicode functionality with the `Char` functionality from libcore
- thus, moved doc comment for `char` from `core::char` to `unicode::char`
- libcollections remains the collection point for `std::str`
The Unicode-aware functions that previously lived in the `Char` and `StrSlice` traits are no longer available to programs that only use libcore. To regain use of these methods, include the libunicode crate and `use` the `UnicodeChar` and/or `UnicodeStrSlice` traits:
    extern crate unicode;
    use unicode::UnicodeChar;
    use unicode::UnicodeStrSlice;
    use unicode::Words; // if you want to use the words() method
NOTE: this does *not* impact programs that use libstd, since UnicodeChar and UnicodeStrSlice have been added to the prelude.
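For illustration, a minimal sketch of code using the relocated methods through the libunicode traits (not from the patch; behavior as described above):

    extern crate unicode;

    use unicode::UnicodeChar;
    use unicode::UnicodeStrSlice;

    fn main() {
        // Unicode-aware methods now come from the libunicode traits:
        assert!('é'.is_alphabetic());
        assert_eq!("  hello  ".trim(), "hello");
    }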
closes #15224
[breaking-change]
- unicode tests live in coretest crate
- libcollections str tests need UnicodeChar trait.
- libregex perlw tests were checking a char in the Alphabetic category,
\x2161. Confirmed perl 5.18 considers this a \w character. Changed to
\x2961, which is not \w as the test expects.
Libcore's test infrastructure is complicated by the fact that many lang
items are defined in the crate. The current approach (realcore/realstd
imports) is hacky and hard to work with (tests inside of core::cmp
haven't been run for months!).
Moving tests to a separate crate does mean that they can only test the
public API of libcore, but I don't feel that that is too much of an
issue. The only tests that I had to get rid of were some checking the
various numeric formatters, but those are also exercised through normal
format! calls in other tests.
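For instance, a coretest-style test can exercise `core::cmp` purely through its public API (a sketch, not an actual test from this move):

    extern crate core;

    use core::cmp::{min, max};

    #[test]
    fn min_max_via_public_api() {
        // Only libcore's public interface is reachable from the coretest crate.
        assert_eq!(min(1, 2), 1);
        assert_eq!(max(1, 2), 2);
    }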
Closes #14888 (Allow disabling jemalloc as the memory allocator)
Closes #14905 (rustc: Improve span for error about using a method as a field.)
Closes #14920 (Fix #14915)
Closes #14924 (Add a Syntastic plugin for Rust.)
Closes #14935 (debuginfo: Correctly handle indirectly recursive types)
Closes #14938 (Reexport `num_cpus` in `std::os`. Closes #14707)
Closes #14941 (std: Don't fail the task when a Future is dropped)
Closes #14942 (rustc: Don't mark type parameters as exported)
Closes #14943 (doc: Fix a link in the FAQ)
Closes #14944 (Update "use" to "uses" on ln186)
Closes #14949 (Update repo location)
Closes #14950 (fix typo in the libc crate)
Closes #14951 (Update Sublime Rust github link)
Closes #14953 (Fix --disable-rpath and tests)
This involved a few changes to the local build system:
* Makefiles now prefer our own LD_LIBRARY_PATH over the user's LD_LIBRARY_PATH
in order to support building rust with rust already installed.
* The compiletest program was taught to correctly pass through the aux dir as a
component of LD_LIBRARY_PATH in more situations.
This change was spliced out of #14832 to consist of just the fixes to running
tests without an rpath setting embedded in executables.
Two line summary: Distinguish HOST_RPATH and TARGET_RPATH; added
RPATH_LINK_SEARCH; skip tests broken in stage1; general cleanup.
`HOST_RPATH_VAR$(1)_T_$(2)_H_$(3)` and `TARGET_RPATH_VAR$(1)_T_$(2)_H_$(3)`
both match the format of the old `RPATH_VAR$(1)_T_$(2)_H_$(3)` (which
is still being set the same way that it was before, to one of either
HOST/TARGET depending on what stage we are building). Namely, the format
is <XXX>_RPATH_VAR = "<LD_LIB_PATH_ENVVAR>=<COLON_SEP_PATH_ENTRIES>"
What this commit does:
* Pass both of the (newly introduced) HOST and TARGET rpath setup vars
to `maketest.py`
* Update `maketest.py` to no longer update the LD_LIBRARY_PATH itself.
Instead, it passes along the HOST and TARGET rpath setup vars in
environment variables `HOST_RPATH_ENV` and `TARGET_RPATH_ENV`
* Also, pass the current stage number to maketest.py; it in turn
passes it (via an env var) to run-make tests.
This allows the run-make tests to selectively change behavior
(e.g. turn themselves off) to deal with incompatibilities with
e.g. stage1.
* Cleanup: Distinguish in tools.mk between the command to run (`RUN`)
and the file to generate to drive that command (`RUN_BINFILE`). The
main thing this enables is that `RUN` can now setup the
`TARGET_RPATH_ENV` without having to dirty up the runner code in
each of the `run-make` Makefiles.
* Cleanup: Factored out commands to delete dylib/rlib into
REMOVE_DYLIBS/REMOVE_RLIBS.
There were places where we were only calling `rm $(call DYLIB,foo)`
even though we really needed to get rid of the whole glob (at least
based on alex's findings on #13753 that removing the symlink does not
suffice).
Therefore rather than peppering the code with the awkward
`rm $(TMPDIR)/$(call DYLIB_GLOB,foo)`, I instead introduced a common
`REMOVE_DYLIBS` user function that expands into that when called.
After adding an analogous `REMOVE_RLIBS`, I changed all of the
existing calls that rm dylibs or rlibs to use these routines
instead.
Note that the latter is not a true refactoring since I may have
changed cases where it was our intent to only remove the sym-link.
(But if that is the case, then we need to more deeply investigate
alex's findings on #13753 where the system was still dynamically
loading up the non-symlinked libraries that it finds on the load
path.)
* Added RPATH_LINK_SEARCH command and use it on Linux.
On some platforms, namely Linux, when you have libboot.so that has
its internal rpath set (to e.g. $(ORIGIN)/path/to/HOSTDIR), the
linker still complains when you do the link step and it does not
know where to find libraries that libboot.so depends upon that live
in HOSTDIR (think e.g. librustuv.so).
As far as I can tell, the GNU linker will consult the
LD_LIBRARY_PATH as part of the linking process to find such
libraries. But if you want to be more careful and not override
LD_LIBRARY_PATH for the `gcc` invocation, then you need some other
way to tell the linker where it can find the libraries that
libboot.so needs. The solution to this on Linux is the
`-Wl,-rpath-link` command line option.
However, this command line option does not exist on Mac OS X (which
appears to be figuring out how to resolve the libboot.dylib
dependency by some other means, perhaps by consulting the rpath
setting within libboot.dylib).
So, in order to abstract over this distinction, I added the
RPATH_LINK_SEARCH macro to the run-make infrastructure and added
calls to it where necessary to get Linux working. On architectures
other than Linux, the macro expands to nothing.
* Disable miscellaneous tests atop stage1.
* An especially interesting instance of the previous bullet point:
Excuse regex from doing rustdoc tests atop stage1.
This was a (nearly-) final step to get `make check-stage1` working
again.
The use of a special-case check for regex here is ugly but is
analogous to other similar checks for regex, such as the one that landed
in PR #13844.
The way this is written, the user will get a reminder that
doc-crate-regex is being skipped whenever their rules attempt to do
the crate documentation tests. This is deliberate: I want people
running `make check-stage1` to be reminded about which cases are
being skipped. (But if such echo noise is considered offensive, it
can obviously be removed.)
* Got windows working with the above changes.
This portion of the commit is a cleanup revision of the (previously
mentioned on try builds) re-architecting of how the LD_LIBRARY_PATH
setup and extension is handled in order to accommodate Windows' (1.)
use of `$PATH` for that purpose and (2.) use of spaces in `$PATH`
entries (problematic for make and for interoperation with tools at
the shell).
* In addition, since the code has been rearchitected to pass the
HOST_RPATH_DIR/TARGET_RPATH_DIR rather than a whole sh
environment-variable setting command, there is no need for the
convert_path_spec calls in maketest.py, which in fact were put in
place to placate Windows but were now causing the Windows builds to
fail. Instead we just convert the paths to absolute paths just like
all of the other path arguments.
Also, note for makefile hackers: apparently you cannot quote operands
to `ifeq` in a Makefile (or at least, you need to be careful about
adding quotes, e.g. adding them to only one side).
The current suite of benchmarks for the standard distribution takes a significant
amount of time to run, but it's unclear whether we're gaining any benefit from
running them. Some specific pain points:
* No one is looking at the data generated by the benchmarks. We have no graphs
or analysis of what's happening, so all the data is largely being cast into
the void.
* No benchmark has ever uncovered a bug; they have always run successfully.
* Benchmarks not only take a significant amount of time to run, but also take a
significant amount of time to compile. I don't think we should mitigate this
for now because it's useful to ensure that they do indeed still compile.
This commit disables --bench test runs by default as part of `make check`,
flipping the NO_BENCH environment variable to a PLEASE_BENCH variable which will
manually enable benchmarking. If and when a dedicated bot is set up for
benchmarking, this flag can be enabled on that bot.
There's no need to include this specific flag just for android. We can
already deal with what it tries to solve by using -C linker=/path/to/cc
and -C ar=/path/to/ar. The Makefiles for rustc already set this up when
we're crosscompiling.
I did add the flag to compiletest though so it can find gdb. Though, I'm
pretty sure we don't run debuginfo tests on android anyway right now.
[breaking-change]
This adds a `std::rt::heap` module with a nice allocator API. It's a
step towards fixing #13094 and is a starting point for working on a
generic allocator trait.
The revision used for the jemalloc submodule is the stable 3.6.0 release.
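A hedged sketch of how such an allocator API might be used (the function names and signatures here are assumptions, not necessarily the exact API added by this patch):

    use std::rt::heap;

    fn main() {
        unsafe {
            // Ask the allocator for 64 bytes with 8-byte alignment...
            let ptr = heap::allocate(64, 8);
            // ...and hand them back, repeating the size and alignment.
            heap::deallocate(ptr, 64, 8);
        }
    }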
Closes #11807
Compile-fail tests for syntax extensions belong in this suite which has correct
dependencies on all artifacts rather than just the target artifacts.
Closes #13818
There is currently not much precedent for target crates requiring syntax
extensions to compile their test versions. This dependency is possible, but
can't be encoded through the normal means of DEPS_regex because it is a
test-only dependency and it must be a *host* dependency (it's a syntax
extension).
Closes #13844
This adds the target triple to the crate metadata.
When searching for a crate the phase (link, syntax) is taken into account.
During link phase only crates matching the target triple are considered.
During syntax phase, either the target or host triple will be accepted, unless
the crate defines a macro_registrar, in which case only the host triple will
match.
First, documented the existing `CTEST_DISABLE_$(TEST_GROUP)` pattern
for conditionally disabling tests based on missing host features.
Added variant of above, `CTEST_DISABLE_NONSELFHOST_$(TEST_GROUP)`,
which is only queried in contexts where the target is not on the
CFG_HOST list (which I interpret as the list of targets that our host
can compatibly emulate; e.g. the example that i686 and x86_64 can in
theory run each others' tests).
Driveby fix: Remove redundant copy of
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec dependency declaration.
These syntax extensions need a place to be documented, and this starts passing a
`--cfg dox` parameter to `rustdoc` when building and testing documentation in
order to document macros in a way that has no effect on the compiled crate,
only on the documentation.
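A minimal sketch of the pattern this enables (macro name and docs hypothetical): a documented macro stub that is compiled only when the doc build passes `--cfg dox`.

    /// Illustrative documentation for `my_macro!`; this stub exists only when
    /// `--cfg dox` is set, so it never affects the real crate.
    #[cfg(dox)]
    #[macro_export]
    macro_rules! my_macro {
        () => (())
    }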
Closes #5605
1. Fix a long-standing typo in the makefile: the relevant
CTEST_NAME here is `rpass-full` (with a dash), not
`rpass_full`.
2. The rpass-full tests depend on the complete set of target
libraries. Therefore, the rpass-full tests need to use
the dependencies held in the CSREQ-prefixed variable, not
the TLIBRUSTC_DEFAULT-prefixed variable.
Whenever a failure happens, if a program is run with
`RUST_LOG=std::rt::backtrace`, a backtrace will be printed to the task's stderr
handle. Stack traces are unconditionally printed on double-failure and
rtabort!().
This ended up having a nontrivial implementation, and here's some highlights of
it:
* We're bundling libbacktrace for everything but OSX and Windows
* We use libgcc_s and its libunwind apis to get a backtrace of instruction
pointers
* On OSX we use dladdr() to go from an instruction pointer to a symbol
* On unix that isn't OSX, we use libbacktrace to get symbols
* Windows, as usual, has an entirely separate implementation
Lots more fun details and comments can be found in the source itself.
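For example, with `RUST_LOG=std::rt::backtrace` set in the environment, a failing program like the following (contents purely illustrative) prints a backtrace before unwinding:

    fn main() {
        // The runtime prints a backtrace to this task's stderr handle on failure
        // when RUST_LOG=std::rt::backtrace is set.
        fail!("something went wrong");
    }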
Closes #10128
E.g. this stops check-...-doc rules for `rustdoc.md` and `librustdoc`
from stamping on each other, so that they are correctly built and
tested. (Previously only the rustdoc crate was tested.)
This converts it to be very similar to crates.mk, with a single list of
the documentation items creating all the necessary bits and pieces.
Changes include:
- rustdoc is used to render HTML & test standalone docs
- documentation building now obeys NO_REBUILD=1
- testing standalone docs now obeys NO_REBUILD=1
- L10N is slightly less broken (in particular, it shares dependencies
and code with the rest of the code)
- PDFs can be built for all documentation items, not just tutorial and
manual
- removes the obsolete & unused extract-tests.py script
- adjust the CSS for standalone docs to use the rustdoc syntax
highlighting
tidy has some limitations (e.g. the "checked in binaries" check doesn't
and can't actually check git), and so it's useful to run tests without
running tidy occasionally.
The new methodology can be found in the re-worded comment, but the gist of it is
that -C prefer-dynamic doesn't turn off static linkage. The error messages
should also be a little more sane now.
Closes #12133
Previously crates like `green` and `native` would still depend on their
parents when running `make check-stage2-green NO_REBUILD=1`; this
ensures that they only depend on their source files.
Also, apply NO_REBUILD to the crate doc tests, so, for example,
`check-stage2-doc-std` will use an already compiled `rustdoc` directly.
This has been a long time coming. Conditions in rust were initially envisioned
as being a good alternative to error code return pattern. The idea is that all
errors are fatal-by-default, and you can opt-in to handling the error by
registering an error handler.
While sounding nice, conditions ended up having some unforeseen shortcomings:
* Actually handling an error has some very awkward syntax:
    let mut result = None;
    let mut answer = None;
    io::io_error::cond.trap(|e| { result = Some(e) }).inside(|| {
        answer = Some(some_io_operation());
    });
    match result {
        Some(err) => { /* hit an I/O error */ }
        None => {
            let answer = answer.unwrap();
            /* deal with the result of I/O */
        }
    }
This pattern can certainly use functions like io::result, but at its core,
actually handling conditions is fairly difficult.
* The "zero value" of a function is often confusing. One of the main ideas
behind using conditions was to change the signature of I/O functions. Instead
of read_be_u32() returning a result, it returned a u32. Errors were notified
via a condition, and if you caught the condition you understood that the "zero
value" returned is actually a garbage value. These zero values are often
difficult to understand, however.
One case of this is the read_bytes() function. The function takes an integer
length of the amount of bytes to read, and returns an array of that size. The
array may actually be shorter, however, if an error occurred.
Another case is fs::stat(). The theoretical "zero value" is a blank stat
struct, but it's a little awkward to create and return a zero'd out stat
struct on a call to stat().
In general, the return values of functions that can raise errors are much more
natural when using a Result as opposed to an always-usable zero-value.
* Conditions impose a necessary runtime requirement on *all* I/O. In theory I/O
is as simple as calling read() and write(), but using conditions imposed the
restriction that a rust local task was required if you wanted to catch errors
with I/O. While certainly a surmountable difficulty, this was always a bit of
a thorn in the side of conditions.
* Functions raising conditions are not always clear that they are raising
conditions. This suffers a similar problem to exceptions where you don't
actually know whether a function raises a condition or not. The documentation
likely explains, but if someone retroactively adds a condition to a function
there's nothing forcing upstream users to acknowledge a new point of task
failure.
* Libraries using I/O are not guaranteed to correctly raise on conditions when an
error occurs. In developing various I/O libraries, it's much easier to just
return `None` from a read rather than raising an error. The silent contract of
"don't raise on EOF" was a little difficult to understand and threw a wrench
into the answer of the question "when do I raise a condition?"
Many of these difficulties can be overcome through documentation, examples, and
general practice. In the end, all of these difficulties added together ended up
being too overwhelming and improving various aspects didn't end up helping that
much.
A result-based I/O error handling strategy also has shortcomings, but the
cognitive burden is much smaller. The tooling necessary to make this strategy as
usable as conditions were is much smaller than the tooling necessary for
conditions.
Perhaps conditions may manifest themselves as a future entity, but for now
we're going to remove them from the standard library.
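For contrast, a minimal sketch of the Result-based style (reusing the hypothetical `some_io_operation` from the example above):

    match some_io_operation() {
        Ok(answer) => { /* deal with the result of I/O */ }
        Err(err) => { /* hit an I/O error */ }
    }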
Closes #9795, closes #8968
This commit removes the -c, --emit-llvm, -s, --rlib, --dylib, --staticlib,
--lib, and --bin flags from rustc, adding the following flags:
* --emit=[asm,ir,bc,obj,link]
* --crate-type=[dylib,rlib,staticlib,bin,lib]
The -o option has also been redefined to be used for *all* flavors of outputs.
This means that we no longer ignore it for libraries. The --out-dir remains the
same as before.
The new logic for files that rustc emits is as follows:
1. Output types are dictated by the --emit flag. The default value is
--emit=link, and this option can be passed multiple times and have all
options stacked on one another.
2. Crate types are dictated by the --crate-type flag and the #[crate_type]
attribute. The flags can be passed many times and stack with the crate
attribute (see the sketch after this list).
3. If the -o flag is specified, and only one output type is specified, the
output will be emitted at this location. If more than one output type is
specified, then the filename of -o is ignored, and all output goes in the
directory that -o specifies. The -o option always ignores the --out-dir
option.
4. If the --out-dir flag is specified, all output goes in this directory.
5. If -o and --out-dir are both not present, all output goes in the current
directory of the process.
6. When multiple output types are specified, the filestem of all output is the
same as the name of the CrateId (derived from a crate attribute or from the
filestem of the crate file).
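As a hedged sketch of point 2, crate types can also be stacked via attributes (shown here with current inner-attribute syntax; the exact syntax of the era may differ):

    #![crate_type = "rlib"]
    #![crate_type = "dylib"] // stacks with any --crate-type flags on the command line

    pub fn exported() {}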
Closes #7791, closes #11056, closes #11667
Previously, the check-fast and check-lite test suites weren't picking up all
target crates, rather just std/extra. In order to ensure that all of our crates
work on windows, I've modified these rules to build the test suites for all
TARGET_CRATES members. Note that this still excludes rustc/syntax/rustdoc.
This changes android testing to upload *all* target crates rather than just a
select subset. This should unblock #11867 which is introducing a libglob
dependency in testing.
This is hopefully the beginning of the long-awaited dissolution of libextra.
Using the newly created build infrastructure for building libraries, I decided
to move the first module out of libextra.
While not being a particularly meaty module in and of itself, the flate module
is required by rustc and additionally has a native C dependency. I was able to
very easily split out the C dependency from rustrt, update librustc, and
magically everything gets installed to the right locations and built
automatically.
This is meant to be a proof-of-concept commit to how easy it is to remove
modules from libextra now. I didn't put any effort into modernizing the
interface of libflate or updating it other than to remove the one glob import it
had.
Before this patch, if you wanted to add a crate to the build system you had to
change about 100 lines across 8 separate makefiles. This is highly error prone
and opaque to all but a few. This refactoring is targeted at consolidating this
effort so adding a new crate adds one line in one file in a way that everyone
can understand.
The new macro loading infrastructure needs the ability to force a
procedural-macro crate to be built with the host architecture rather than the
target architecture (because the compiler is just about to dlopen it).
The official documentation sorely needs an explanation of the rust runtime and what it is exactly, and I want this guide to provide that information.
I'm unsure of whether I've been too light on some topics while too heavy on others. I also feel like a few things are still missing. As always, feedback is appreciated, especially about things you'd like to see written about!
This reorganizes the documentation index to be more focused on the in-tree docs and to clean up the style; it also adds @steveklabnik's pointer guide.
Ensure configure creates doc/guides directory
Fix configure makefile and tests
Remove old guides dir and configure option, convert testing to guide
Remove ignored files
Fix submodule issue
prepend dir in makefile so that bors knows how to build the docs
S to uppercase
This pull request extracts all scheduling functionality from libstd, moving it into its own separate crates. The new libnative and libgreen will be the new way in which 1:1 and M:N scheduling is implemented. The standard library still requires an interface to the runtime, however (think of things like `std::comm` and `io::println`). The interface is now defined by the `Runtime` trait inside of `std::rt`.
The booting process is now that libgreen defines the start lang-item and that's it. I want to extend this soon to have libnative also have a "start lang item" but also allow libgreen and libnative to be linked together in the same process. For now though, only libgreen can be used to start a program (unless you define the start lang item yourself). Again though, I want to change this soon, I just figured that this pull request is large enough as-is.
This certainly wasn't a smooth transition: certain functionality has no equivalent in this new separation, and some functionality is now better enabled through this new system. I did my best to separate all of the commits by topic and keep things fairly bite-sized, although some are indeed larger than others.
As a note, this is currently rebased on top of my `std::comm` rewrite (or at least an old copy of it), but none of those commits need reviewing (that will all happen in another pull request).
It only really makes sense to run tests for the build target anyway because it's
not guaranteed that you can execute other targets.
This is blocking the next snapshot
Right now multiple targets/hosts is broken because the libdir passed for all of
the LLVM libraries is for the wrong architecture. By using the right arch
(target, not host), everything is linked and assembled just fine.
In order to keep up to date with changes to the libraries that `llvm-config`
spits out, the dependencies on LLVM are recorded in a dynamically generated Rust
file. This file is now regenerated automatically whenever LLVM is updated, so it
stays current.
At the same time, this cleans out some old cruft which isn't necessary in the
makefiles in terms of dependencies.
Closes #10745, closes #10744
CFG_BUILD_DIR, CFG_LLVM_SRC_DIR and CFG_SRC_DIR all have trailing
slashes, by definition, so this is correct.
(This is purely cosmetic; the doubled slash is ignored by all the tools we're using.)
This infrastructure is meant to support runnings tests that involve various
interesting interdependencies about the types of crates being linked or possibly
interacting with C libraries. The goal of these make tests is to not restrict
them to a particular test runner, but allow each test to run its own tests.
To this end, there is a new src/test/run-make directory which has sub-folders of
tests. Each test requires a `Makefile`, and running the tests consists simply of
running `make` inside the directory. The new target is `check-stageN-rmake`.
These tests will have the destination directory (as TMPDIR) and the local rust
compiler (as RUSTC) passed along to them. There are also some helpful
cross-platform utilities included in src/test/run-make/tools.mk to aid with
compiling C programs and running them.
The impetus for adding this new test suite is to allow various interesting forms
of testing rust linkage. All of the tests initially added are various flavors of
compiling Rust and C with one another as well as just making sure that rust
linkage works in general.
Closes #10434
This commit implements the support necessary for generating both intermediate
and result static rust libraries. This is an implementation of my thoughts in
https://mail.mozilla.org/pipermail/rust-dev/2013-November/006686.html.
When compiling a library, we still retain the "lib" option, although now there
are "rlib", "staticlib", and "dylib" as options for crate_type (and these are
stackable). The idea of "lib" is to generate the "compiler default" instead of
having to choose (although all are interchangeable). For now I have left the
"compiler default" to be a dynamic library for size reasons.
Of the rust libraries, lib{std,extra,rustuv} will bootstrap with an
rlib/dylib pair, but lib{rustc,syntax,rustdoc,rustpkg} will only be built as a
dynamic object. I chose this for size reasons, but also because you're probably
not going to be embedding the rustc compiler anywhere any time soon.
Other than the options outlined above, there are a few defaults/preferences that
are now opinionated in the compiler:
* If both a .dylib and .rlib are found for a rust library, the compiler will
prefer the .rlib variant. This is overridable via the -Z prefer-dynamic option
* If generating a "lib", the compiler will generate a dynamic library. This is
overridable by explicitly saying what flavor you'd like (rlib, staticlib,
dylib).
* If no options are passed to the command line, and no crate_type is found in
the destination crate, then an executable is generated
With this change, you can successfully build a rust program with 0 dynamic
dependencies on rust libraries. There is still a dynamic dependency on
librustrt, but I plan on removing that in a subsequent commit.
This change includes no tests just yet. Our current testing
infrastructure/harnesses aren't very amenable to doing flavorful things with
linking, so I'm planning on adding a new mode of testing which I believe belongs
as a separate commit.
Closes #552
This commit moves all thread-blocking I/O functions out of the std::os module.
Their replacements can be found in either std::rt::io::file or in a hidden
"old_os" module inside of native::file. I didn't want to outright delete these
functions because they have a lot of special casing learned over time for each
OS/platform, and I imagine that these will someday get integrated into a
blocking implementation of IoFactory. For now, they're moved to a private module
to prevent bitrot and still have tests to ensure that they work.
I've also expanded the extensions to a few more methods defined on Path, most of
which were previously defined in std::os but now have non-thread-blocking
implementations as part of using the current IoFactory.
The api of io::file is in flux, but I plan on changing it in the next commit as
well.
Closes #10057