This breaks a fair amount of code. The typical patterns are listed below, with a combined before/after sketch after the list:
* `for _ in range(0, 10)`: change to `for _ in range(0u, 10)`;
* `println!("{}", 3)`: change to `println!("{}", 3i)`;
* `[1, 2, 3].len()`: change to `[1i, 2, 3].len()`.
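A minimal sketch combining the three fixes above (pre-1.0 syntax, for illustration only):

    fn main() {
        for _ in range(0u, 10) {      // was: range(0, 10)
            println!("{}", 3i);       // was: println!("{}", 3)
        }
        let n = [1i, 2, 3].len();     // was: [1, 2, 3].len()
        println!("{}", n);
    }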
RFC #30. Closes #6023.
[breaking-change]
The existing APIs for spawning processes took strings for the command
and arguments, but the underlying system may not impose a UTF-8 encoding,
so this is overly limiting.
The assumption we actually want to make is just that the command and
arguments are viewable as `[u8]` slices with no interior NUL bytes, i.e., as
`CString`s. The `ToCStr` trait is a handy bound for types that meet this
requirement (such as `&str` and `Path`).
However, since the commands and arguments are often a mixture of
strings and paths, it would be inconvenient to take a slice with a
single `T: ToCStr` bound. So this patch revamps the process creation API
to instead use a builder-style interface, called `Command`, allowing
arguments to be added one at a time with differing `ToCStr`
implementations for each.
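For illustration, a minimal sketch of the intended builder style; everything beyond `Command::new` and `arg` (the import path, the `output()` call, error handling) is an assumption here rather than part of this description:

    use std::io::Command;                // assumed location of `Command`

    fn main() {
        // The command and each argument only need a `ToCStr` bound, so
        // `&str` and `Path` values can be mixed freely, one call at a time.
        let mut cmd = Command::new("ls");
        cmd.arg("-l");
        cmd.arg("/tmp");
        // Spawning and capturing output is assumed to be a separate call:
        let output = cmd.output();
    }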
The initial cut of the builder API has some drawbacks that can be
addressed once issue #13851 (libstd as a facade) is closed. These are
detailed as FIXMEs.
Closes #11650.
[breaking-change]
This commit moves all logging out of the standard library into an external
crate, liblog, which is now responsible for all logging macros and the
logging implementation. A few reasons for this change are:
* The crate map has always been a bit of a code smell among Rust programs. It
is difficult to load on almost all platforms, and it is used almost
exclusively for logging. Removing the crate map is one of the end goals of
this migration.
* The compiler has a fair bit of special support for logging. It has the
__log_level() expression and it generates a global word per module
specifying the log level. This unfairly favors the built-in logging
system, and is much better done purely in libraries instead of in the
compiler itself.
* Initialization of logging is much easier to do if there is no reliance on a
magical crate map being available to set module log levels.
* If the logging library can be written outside of the standard library, there's
no reason that it shouldn't be. It's unlikely that we're going to build the
highest-quality logging library of all time, so third-party libraries should
be able to provide logging systems just as good as the default one
provided in the Rust distribution.
A migration such as this does not come for free. There are some
subtle changes in the behavior of liblog vs. the previous logging macros:
* The core change of this migration is that there is no longer a physical
log-level per module. This concept is still emulated (it is quite useful), but
there is now only a global log level, not a local one. This global log level
is a reflection of the maximum of all log levels specified. The previously
generated logging code looked like:
    if specified_level <= __module_log_level() {
        println!(...)
    }
The newly generated code looks like:
    if specified_level <= ::log::LOG_LEVEL {
        if ::log::module_enabled(module_path!()) {
            println!(...)
        }
    }
Notably, the first layer of checking is still intended to be "super fast" in
that it's just a load of a global word and a compare. The second layer of
checking is executed to determine if the current module does indeed have
logging turned on.
This means that if any module has a debug log level turned on, all modules
with debug log levels get a little bit slower (they all do more expensive
dynamic checks to determine if they're turned on or not).
Semantically, this migration brings no change in this respect, but
runtime-wise, this will have a perf impact on some code.
* A `RUST_LOG=::help` directive will no longer print out a list of all modules
that can be logged. This is because the crate map will no longer specify the
log levels of all modules, so the list of modules is not known. Additionally,
warnings can no longer be provided if a malformed logging directive was
supplied.
The new "hello world" for logging looks like:
    #[phase(syntax, link)]
    extern crate log;

    fn main() {
        debug!("Hello, world!");
    }
The std::run module is a relic from a standard library long since past, and
there's not much use in having two modules for executing processes when one
is only slightly more convenient than the other. This commit merges the two
modules, moving lots of functionality from std::run into std::io::process
and then deleting std::run.
New things you can find in std::io::process are listed below, with a short
usage sketch after the list:
* Process::new() now only takes prog/args
* Process::configure() takes a ProcessConfig
* Process::status() is the same as run::process_status
* Process::output() is the same as run::process_output
* I/O for spawned processes now defaults to being captured in pipes instead of ignored
* Process::kill() was added (plus an associated green/native implementation)
* Process::wait_with_output() is the same as the old finish_with_output()
* destroy() is now signal_exit()
* force_destroy() is now signal_kill()
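As a rough sketch of the merged API (argument and return types here are assumptions based on the list above, not taken verbatim from the patch):

    use std::io::process::Process;

    fn main() {
        // Roughly the old run::process_status("ls", [~"-l"]):
        let status = Process::status("ls", [~"-l"]);
        // Roughly the old run::process_output(..); stdout/stderr are captured:
        let output = Process::output("echo", [~"hello"]);
    }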
Closes #2625
Closes #10016
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except in a `let`
initializer, where we default to the innermost block (see the sketch
after this list). Rules are documented in the code, but not in the
manual (yet). See the new test run-pass/cleanup-value-scopes.rs for
examples.
- Refactor Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
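As a rough illustration of the `let`-initializer rule from the first bullet (a sketch with a noisy destructor; not taken from the new test):

    struct Noisy;
    impl Drop for Noisy {
        fn drop(&mut self) { println!("dropped"); }
    }
    fn make() -> Noisy { Noisy }

    fn main() {
        // Temporary created in a `let` initializer: its cleanup is scheduled
        // for the end of the innermost block, so the borrow stays valid.
        let r = &make();
        // Temporary created in an ordinary statement: it is cleaned up at the
        // end of that statement, printing "dropped" immediately.
        make();
        println!("the value behind `r` is still alive here");
    } // the temporary behind `r` is dropped here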
This is useful both for performance (otherwise logging is unbuffered) and
for correctness. Because we can't block a task that is being destroyed while
waiting for the logger to close, loggers are opened with a 'CloseAsynchronously'
specification. This causes libuv to defer the call to close() until the next
turn of the event loop.
If you spin in a tight loop around printing, you never yield control back to
the libuv event loop, meaning that you simply enqueue a large number of close
requests that are never actually serviced. If you then keep trying to create
handles, one will eventually fail, the runtime will attempt to print the
failure, and mass destruction ensues.
Caching will provide better performance as well as prevent creation of too many
handles.
Closes #10626