for `~str`/`~[]`.
Note that `~self` still remains, since I forgot to add support for
`Box<self>` before the snapshot.
How to update your code:
* Instead of `~EXPR`, you should write `box EXPR`.
* Instead of `~TYPE`, you should write `Box<TYPE>`.
* Instead of `~PATTERN`, you should write `box PATTERN`.
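As a rough before/after sketch of the migration (the values and types here are
made up purely for illustration, with the old syntax shown in comments):

    // Old: let x: ~int = ~5;
    //      match x { ~n => println!("{}", n) }
    let x: Box<int> = box 5;
    match x { box n => println!("{}", n) }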
[breaking-change]
These syntax extensions need a place to be documented, so this starts passing a
`--cfg dox` parameter to `rustdoc` when building and testing documentation. The
flag is used to document macros in a way that has no effect on the compiled
crate, only on the documentation.
Closes #5605
This commit moves all logging out of the standard library into an external
crate. This new crate is responsible for all logging macros and the logging
implementation. A few reasons for this change are:
* The crate map has always been a bit of a code smell among rust programs. It
has difficulty being loaded on almost all platforms, and it's used almost
exclusively for logging and only logging. Removing the crate map is one of the
end goals of this movement.
* The compiler has a fair bit of special support for logging. It has the
__log_level() expression as well as generating a global word per module
specifying the log level. This is unfairly favoring the built-in logging
system, and is much better done purely in libraries instead of the compiler
itself.
* Initialization of logging is much easier to do if there is no reliance on a
magical crate map being available to set module log levels.
* If the logging library can be written outside of the standard library, there's
no reason that it shouldn't be. It's likely that we're not going to build the
highest quality logging library of all time, so third-party libraries should
be able to provide just as high-quality logging systems as the default one
provided in the rust distribution.
With a migration such as this, the change does not come for free. There are some
subtle changes in the behavior of liblog vs the previous logging macros:
* The core change of this migration is that there is no longer a physical
log-level per module. This concept is still emulated (it is quite useful), but
there is now only a global log level, not a local one. This global log level
is a reflection of the maximum of all log levels specified. The previously
generated logging code looked like:
    if specified_level <= __module_log_level() {
        println!(...)
    }
The newly generated code looks like:
    if specified_level <= ::log::LOG_LEVEL {
        if ::log::module_enabled(module_path!()) {
            println!(...)
        }
    }
Notably, the first layer of checking is still intended to be "super fast" in
that it's just a load of a global word and a compare. The second layer of
checking is executed to determine if the current module does indeed have
logging turned on.
This means that if any module has a debug log level turned on, all modules
with debug log levels get a little bit slower (they all do more expensive
dynamic checks to determine if they're turned on or not).
Semantically, this migration brings no change in this respect, but
runtime-wise, this will have a perf impact on some code.
* A `RUST_LOG=::help` directive will no longer print out a list of all modules
that can be logged. This is because the crate map will no longer specify the
log levels of all modules, so the list of modules is not known. Additionally,
warnings can no longer be provided if a malformed logging directive was
supplied.
The new "hello world" for logging looks like:
    #[phase(syntax, link)]
    extern crate log;

    fn main() {
        debug!("Hello, world!");
    }
If no arguments are given to `vec!` then no pushes are emitted and
so the compiler (rightly) complains that the mutability of `temp` is
never used.
This behaviour is rather annoying for users.
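For reference, a zero-argument `vec!()` expanded to roughly the following
(illustrative, not the exact expansion), which is what triggers the
unused-mutability warning:

    {
        let mut temp = Vec::new(); // warning: variable does not need to be mutable
        temp
    }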
Formatting via reflection has been a little questionable for some time now, and
it's a little unfortunate that one of the standard macros will silently use
reflection when you weren't expecting it. This adds small bits of code bloat to
libraries and is not always necessary. In light of this information,
this commit switches assert_eq!() to using {} in the error message instead of
{:?}.
In updating existing code, there were a few error cases that I encountered:
* It's impossible to define Show for [T, ..N]. I think DST will alleviate this
because we can define Show for [T].
* A few types here and there just needed a #[deriving(Show)]
* Type parameters needed a Show bound; in these cases I often switched to `assert!(a == b)`
* `Path` doesn't implement `Show`, so assert_eq!() cannot be used on two paths.
I don't think this is much of a regression though because {:?} on paths looks
awful (it's a byte array).
Concretely speaking, this shaved 10K off a 656K binary. Not a lot, but sometimes
significant for smaller binaries.
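As a small illustration of the fallout described above (the type and fields
here are made up), a type compared with assert_eq!() now needs a Show
implementation for the {} error message in addition to equality:

    #[deriving(Eq, Show)]
    struct Point { x: int, y: int }

    fn main() {
        assert_eq!(Point { x: 1, y: 2 }, Point { x: 1, y: 2 });
    }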
I've been playing around with code size when linking to libstd recently, and these are some findings that really helped code size. I started out by eliminating all I/O implementations from libnative and instead just returning an unimplemented error.
With that done, a `fn main() {}` executable went from ~378K before this patch to about 170K after it. These size wins are all pretty minor, but they all seemed pretty reasonable to me. With native I/O not stubbed out, this takes the size of an LTO executable from 675K to 400K.
This function is a tiny wrapper that LLVM doesn't want to inline, and it ends up
causing more bloat than necessary. The bloat is pretty small, but it's a win of
at least 7k for small executables, and I imagine that the number goes up as
there are more calls to fail!().
Mark it as #[experimental] for now. In theory this attribute will be read in the
future. I believe that the implementation is solid enough for general use,
although I would not be surprised if there were bugs in it still. I think that
it's at the point now where public usage of it will hopefully start to uncover
the last few remaining bugs.
Closes #12044
This "bubble up an error" macro was originally named if_ok! in order to get it
landed, but after the fact it was discovered that this name is not exactly
desirable.
The name `if_ok!` doesn't make it immediately clear that it has much to do with
error handling, and it doesn't look fantastic in all contexts
(`if if_ok!(...) {}`). In general, the agreed opinion about `if_ok!` is that it
came in as subpar.
The name `try!` is more evocative of error handling, it's shorter by 2 letters,
and it looks fitting in almost all circumstances. One concern about the word
`try!` is that it's too evocative of exceptions, but the belief is that this
will be overcome with documentation and examples.
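For reference, `try!(expr)` expands to roughly the following (illustrative):

    match expr {
        Ok(val) => val,
        Err(err) => return Err(err),
    }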
Closes #12037
Any macro tagged with #[macro_export] will be shown in the documentation for
that module. This also documents all the existing macros inside of std::macros.
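A minimal sketch of the kind of item this now picks up (the macro name and body
are made up, written in the paren-style macro_rules! of the time):

    /// Doubles the given expression.
    #[macro_export]
    macro_rules! double(
        ($e:expr) => ($e + $e)
    )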
Closes #3163
cc #5605
Closes #9954
If you were writing to something along the lines of `self.foo`, then under the
new closure rules you were borrowing `self` for the entirety of the
closure, meaning that you couldn't format other fields of `self` at the same
time as writing to a buffer contained in `self`.
By lifting the borrow outside of the closure the borrow checker can better
understand that you're only borrowing one of the fields at a time. This had to
use type ascription as well in order to preserve trait object coercions.
This has been a long time coming. Conditions in rust were initially envisioned
as a good alternative to the error-code-return pattern. The idea is that all
errors are fatal-by-default, and you can opt in to handling the error by
registering an error handler.
While sounding nice, conditions ended up having some unforeseen shortcomings:
* Actually handling an error has some very awkward syntax:
    let mut result = None;
    let mut answer = None;
    io::io_error::cond.trap(|e| { result = Some(e) }).inside(|| {
        answer = Some(some_io_operation());
    });
    match result {
        Some(err) => { /* hit an I/O error */ }
        None => {
            let answer = answer.unwrap();
            /* deal with the result of I/O */
        }
    }
This pattern can certainly use functions like io::result, but at its core
actually handling conditions is fairly difficult (for contrast, a Result-based
version of this snippet is sketched after this list).
* The "zero value" of a function is often confusing. One of the main ideas
behind using conditions was to change the signature of I/O functions. Instead
of read_be_u32() returning a result, it returned a u32. Errors were notified
via a condition, and if you caught the condition you understood that the "zero
value" returned is actually a garbage value. These zero values are often
difficult to understand, however.
One case of this is the read_bytes() function. The function takes an integer
length of the number of bytes to read, and returns an array of that size. The
array may actually be shorter, however, if an error occurred.
Another case is fs::stat(). The theoretical "zero value" is a blank stat
struct, but it's a little awkward to create and return a zero'd out stat
struct on a call to stat().
In general, the return values of functions that can raise errors are much more
natural when using a Result as opposed to an always-usable zero value.
* Conditions impose a necessary runtime requirement on *all* I/O. In theory I/O
is as simple as calling read() and write(), but using conditions imposed the
restriction that a rust local task was required if you wanted to catch errors
with I/O. While certainly a surmountable difficulty, this was always a bit of
a thorn in the side of conditions.
* Functions raising conditions are not always clear that they are raising
conditions. This suffers a similar problem to exceptions where you don't
actually know whether a function raises a condition or not. The documentation
likely explains, but if someone retroactively adds a condition to a function
there's nothing forcing upstream users to acknowledge a new point of task
failure.
* Libraries using I/O are not guaranteed to correctly raise conditions when an
error occurs. In developing various I/O libraries, it's much easier to just
return `None` from a read rather than raising an error. The silent contract of
"don't raise on EOF" was a little difficult to understand and threw a wrench
into the answer of the question "when do I raise a condition?"
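For contrast with the condition-handling snippet in the first bullet above, the
equivalent interaction expressed with a Result is just a match (illustrative):

    match some_io_operation() {
        Ok(answer) => { /* deal with the result of I/O */ }
        Err(err) => { /* hit an I/O error */ }
    }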
Many of these difficulties can be overcome through documentation, examples, and
general practice. In the end, all of these difficulties taken together were too
overwhelming, and improving various aspects didn't end up helping that much.
A result-based I/O error handling strategy also has shortcomings, but the
cognitive burden is much smaller. The tooling necessary to make this strategy as
usable as conditions were is much smaller than the tooling necessary for
conditions.
Perhaps conditions may manifest themselves as a future entity, but for now
we're going to remove them from the standard library.
Closes #9795
Closes #8968
* All I/O now returns IoResult<T> = Result<T, IoError>
* All formatting traits now return fmt::Result = IoResult<()>
* The if_ok!() macro was added to libstd
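A short sketch of how these pieces fit together, assuming the post-migration
read_be_u32() -> IoResult<u32> signature mentioned above (the function name
here is made up):

    fn read_magic(r: &mut Reader) -> IoResult<u32> {
        // if_ok!() returns early with the Err on failure, otherwise it unwraps the Ok.
        let magic = if_ok!(r.read_be_u32());
        Ok(magic)
    }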
This ends up saving a single `call` instruction in the optimised code,
but saves a few hundred lines of non-optimised IR for `fn main() {
fail!("foo {}", "bar"); }` (comparing against the minimal generic
baseline from the parent commit).
They all have to go into a single module at the moment unfortunately.
Ideally, the logging macros would live in std::logging, condition! would
live in std::condition, format! in std::fmt, etc. However, this
introduces cyclic dependencies between those modules and the macros they
use which the current expansion system can't deal with. We may be able
to get around this by changing the expansion phase to a two-pass system
but that's for a later PR.
Closes #2247
cc #11763
This implements a number of the baby steps needed to start eliminating everything inside of `std::io`. It turns out that there are a *lot* of users of that module, so I'm going to try to tackle them separately instead of bringing down the whole system all at once.
This pull implements a large amount of unimplemented functionality inside of `std::rt::io` including:
* Native file I/O (file descriptors, *FILE)
* Native stdio (through the native file descriptors)
* Native processes (extracted from `std::run`)
I also found that there are a number of users of `std::io` which want to read input line-by-line, so I added an implementation of `read_until` and `read_line` to `BufferedReader`.
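A rough sketch of the line-by-line usage this enables (the reader construction
and exact return types here are assumptions):

    let mut reader = BufferedReader::new(some_reader);
    loop {
        match reader.read_line() {
            Some(line) => { /* process the line */ }
            None => break
        }
    }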
With all of these changes in place, I started to axe various usages of `std::io`. There are a lot of one-off uses here and there, but the major use case remaining that doesn't have a fantastic solution is `extra::json`. I ran into a few compiler bugs when attempting to remove that, so I figured I'd come back to it later instead.
There is one fairly major change in this pull, and it's moving from native stdio to uv stdio via `print` and `println`. Unfortunately logging still goes through native I/O (via `dumb_println`). This is going to need some thinking, because I still want the goal of logging/printing to be 0 allocations, and this is not possible if `io::stdio::stderr()` is called on each log message. Instead I think that this may need to be cached as the `logger` field inside the `Task` struct, but that will require a little more work to get right (this is also a similar problem for print/println: do we cache `stdout()` to avoid re-creating it every time?).
This changes an `assert_once_ever!` assertion to just a plain old assertion
around an atomic boolean to ensure that one particular runtime doesn't attempt
to exit twice.
Closes #9739
This large commit implements an `html` output option for rustdoc_ng. The
executable has been altered to be invoked as "rustdoc_ng html <crate>" and
it will dump everything into the local "doc" directory. JSON can still be
generated by changing 'html' to 'json'.
This also fixes a number of bugs in rustdoc_ng relating to comment stripping,
along with some other various issues that I found along the way.
The `make doc` command has been altered to generate the new documentation into
the `doc/ng/$(CRATE)` directories.
Naturally, and sadly, turning off sanity checks in the runtime is
a noticeable performance win. The particular test I'm running goes from
~1.5s to ~1.3s.
Sanity checks are turned *on* when not optimizing, or when cfg
includes `rtdebug` or `rtassert`.
It now actually does logging, and is compiled out when `--cfg rtdebug` is not
given to the libstd build, which it isn't by default. This makes the rt
benchmarks 18-50% faster.
In the old design, TLS held the scheduler struct, and the scheduler struct
held the active task. This posed all sorts of weird problems due to
how we wanted to use the contents of TLS. The cleaner approach is to
leave the active task in TLS and have the task hold the scheduler. To
make this work out the scheduler has to run inside a regular task, and
then once that is the case the context switching code is massively
simplified, as instead of three possible paths there is only one. The
logical flow is also easier to follow, as the scheduler struct acts
somewhat like a "token" indicating what is active.
These changes also necessitated changing a large number of runtime
tests, and rewriting most of the runtime testing helpers.
Polish level is "low", as I will very soon start on more scheduler
changes that will require wiping the polish off. That being said there
should be sufficient comments around anything complex to make this
entirely respectable as a standalone commit.