This PR improves the single-stepping experience for if-expressions (no more jumping into the *else* branch before entering the *then* branch, and no more jumping to the end of the *else* branch after finishing the *then* branch). Unfortunately I don't know of a straightforward way of writing automated tests for this. Suggestions welcome!
Changes:
* default value when no args
* license
* removed libc printing
* use extra::bigint instead of handmade gmp binding
The drawback is that it's two orders of magnitude slower; the good news is that this provides a useful benchmark for extra::bigint.
If a function is marked as external, then it's likely desired for use with some
native library, so we're not really accomplishing a whole lot by internalizing
all of these symbols.
Provide `Closed01` and `Open01` that generate directly from the closed
interval [0,1] and the open interval (0,1), in contrast to the plain impls
for f32 and f64, which generate from the half-open [0,1).
Fixes #7755.
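For illustration, a minimal sketch of the interval distinction, assuming the usual bit-manipulation approach rather than rand's actual implementation (`half_open_01`, `closed_01`, and `open_01` are hypothetical helpers):

```rust
// Dividing a 53-bit value by 2^53 gives the half-open [0, 1); dividing by
// 2^53 - 1 stretches the same values onto the closed [0, 1]; offsetting by
// one half keeps both endpoints out for the open (0, 1).
fn half_open_01(bits: u64) -> f64 {
    ((bits >> 11) as f64) / (1u64 << 53) as f64 // in [0, 1)
}

fn closed_01(bits: u64) -> f64 {
    ((bits >> 11) as f64) / ((1u64 << 53) - 1) as f64 // in [0, 1]
}

fn open_01(bits: u64) -> f64 {
    ((bits >> 12) as f64 + 0.5) / (1u64 << 52) as f64 // in (0, 1)
}

fn main() {
    assert!(half_open_01(u64::MAX) < 1.0);
    assert_eq!(closed_01(u64::MAX), 1.0);
    assert!(open_01(0) > 0.0 && open_01(u64::MAX) < 1.0);
}
```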
This commit fixes issue #10468.
It propagates the optimization level from PkgSrc to compile_crate as a rustc::driver::session::OptLevel enum.
The 'opt' argument to compile_crate was previously hardcoded to boolean false, so calls to session::options would always result in the session::No flag being used.
* moved `extern` inside module
* changed `extern "stdcall"` to `extern "system"`
* changed `cfg(target_os="win32")` to `cfg(windows)`
* only run on Windows && x86 (not x86_64)
* updated copyright dates
There was a syntax error because the `extern "stdcall"` was outside the module instead of inside it.
* moved `extern` inside module
* changed `extern "stdcall"` to `extern "system"`
* changed `cfg(target_os="win32")` to `cfg(windows)`
* updated copyright dates
* changed `log(error, ...)` to `info!(...)`
* added `pub` keyword to kernel32 functions
Changes:
* add licence;
* remove usage of libc and unsafe;
* use BufferedWriter to improve performance;
* use a DummyWriter to cancel binary output in test.
If any of the digits was one past the maximum (e.g. 10**9 for base 10),
then this wasn't detected correctly and so the length of the digit was
one more than expected, causing a very large allocation.
Fixes #10522.
Fixes #10288.
These commits create a `Buffer` trait in the `io` module which represents an I/O reader which is internally buffered. This abstraction is used to reasonably implement `read_line` and `read_until` along with at least an ok implementation of `read_char` (although I certainly haven't benchmarked `read_char`).
This implementation of the meteor contest features:
- insertion checks with a bit trick (see the sketch after this list);
- pregeneration of every feasible placement of the pieces on the board;
- filtering of placements that imply an unfeasible board;
- central symmetry breaking.
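For reference, a small sketch of the bitboard insertion check mentioned in the first bullet (the masks here are made up, not the benchmark's actual board layout):

```rust
// The board and each pregenerated placement are bitmasks, so a piece fits
// exactly when the two masks share no set bit.
fn can_insert(board: u64, placement: u64) -> bool {
    board & placement == 0
}

fn insert(board: u64, placement: u64) -> u64 {
    board | placement
}

fn main() {
    let board = 0b0000_1111u64; // cells 0..=3 already occupied
    let piece = 0b0011_0000u64; // candidate placement on cells 4..=5
    assert!(can_insert(board, piece));
    let board = insert(board, piece);
    assert!(!can_insert(board, piece)); // same cells are now taken
}
```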
Related to #2776.
I've started working on this issue and pushed a small commit which adds a range check for integer literals in `middle::const_eval` (no `uint` support at the moment).
At the moment this patch is just a proof of concept, and I'm not sure whether there is a better function for the checks in `middle::const_eval`. This patch does not check for overflows after constant folding, e.g.:
let x: i8 = 99 + 99;
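A hypothetical sketch of the kind of range check described (not the actual `middle::const_eval` code), just to illustrate why `99 + 99` is only caught if the check also runs after constant folding:

```rust
// Reject integer literals that do not fit the annotated type.
fn literal_fits_i8(value: i64) -> bool {
    value >= i8::MIN as i64 && value <= i8::MAX as i64
}

fn main() {
    assert!(literal_fits_i8(99));       // `let x: i8 = 99;` is accepted
    assert!(!literal_fits_i8(99 + 99)); // 198 is out of range for i8
}
```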
Bare functions are another example of a scalar but non-numeric
type (like char) that should be handled separately in casts.
This disallows expressions like `0 as extern "Rust" fn() -> int;`.
It might be advantageous to allow casts between bare functions
and raw pointers in unsafe code in the future, to pass function
pointers between Rust and C.
Closes #8728
Rename {struct-update,fsu}-moves-and-copies, because the test failed to run
on win32: UAC prevents running any executable whose name contains
"update". (#10452)
Some tests related to #9205 are expected to fail on gcc 4.8,
so they are marked as `xfail-win32` instead of `xfail-fast`.
Some tests using `extra::tempfile` fail on win32 due to #10462.
Mark them as `xfail-win32`.
This commit re-organizes the io::native module slightly in order to have a
working implementation of rtio::IoFactory which uses native implementations. The
goal is to seamlessly multiplex among libuv/native implementations wherever
necessary.
Right now most of the native I/O is unimplemented, but we have existing bindings
for file descriptors and processes which have been hooked up. What this means is
that you can now invoke println!() from libstd with no local task, no local
scheduler, and even without libuv.
There's still plenty of work to do on the native I/O factory, but this is a
first step toward making it an official portion of the standard library. I don't
expect anyone to reach into io::native directly; rather, only std::io
primitives will be used. Each std::io interface seamlessly falls back onto the
native I/O implementation if the local scheduler doesn't have a libuv one
(hurray for trait objects!).
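A hedged sketch of that trait-object fallback, with a made-up `IoFactory` trait standing in for the real rtio interface:

```rust
// Callers work against a boxed trait object and never care which backend
// they actually got.
trait IoFactory {
    fn name(&self) -> &'static str;
}

struct NativeIo;
impl IoFactory for NativeIo {
    fn name(&self) -> &'static str { "native" }
}

struct UvIo;
impl IoFactory for UvIo {
    fn name(&self) -> &'static str { "libuv" }
}

fn local_io(have_uv_scheduler: bool) -> Box<dyn IoFactory> {
    if have_uv_scheduler {
        Box::new(UvIo)     // event-loop backed I/O when a libuv scheduler exists
    } else {
        Box::new(NativeIo) // otherwise fall back to blocking native I/O
    }
}

fn main() {
    println!("using {} I/O", local_io(false).name());
}
```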
This test was failing periodically on Windows and other platforms, and in
debugging the issue locally I've found that the previous test was failing
at the assertion `ns0 <= ns1`. Upon inspecting the values, the two numbers were
very close to one another, but off by a little bit.
I believe this is because `precise_time_s` converts from `u64` to `f64` and
then we convert back to `u64` for the assertion. This conversion is lossy and
not guaranteed to round-trip exactly, so instead I've changed the test to
compare only `u64` values.
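A small illustration of the lossiness, written in present-day Rust rather than the original test:

```rust
// f64 has a 53-bit mantissa, so round-tripping a large u64 nanosecond count
// through f64 can change its value, which is why the test now compares only
// u64 values.
fn main() {
    let ns: u64 = 9_007_199_254_740_993; // 2^53 + 1 is not representable in f64
    let round_tripped = ns as f64 as u64;
    assert_ne!(ns, round_tripped); // the conversion silently dropped a bit
}
```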
I implemented BufWriter. I realize the use of conditions is on its way out for IO, but for now it raises a condition if a write will not fit in the buffer.
I also replaced the seek code for MemWriter. It was adding the offset as a uint, which is unsound for negative offsets. It only happened to work because unsigned addition performs the same operation as signed addition in two's complement, and sizeof(uint) <= sizeof(i64), so there was no missing sign extension. I replaced this with computing the new position as an i64 and clamping it at zero. I don't expect anyone will use BufWriter with a byte buffer greater than 2^63 bytes any time soon.
@alexcrichton
Closes #10433
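A minimal sketch of the seek arithmetic described above, using a hypothetical helper rather than the MemWriter code:

```rust
// Compute the new position in signed 64-bit arithmetic and clamp at zero,
// instead of adding the offset as an unsigned value and relying on wrap-around.
fn seek_position(current: u64, offset: i64) -> u64 {
    let pos = current as i64 + offset; // may go negative for backward seeks
    if pos < 0 { 0 } else { pos as u64 }
}

fn main() {
    assert_eq!(seek_position(10, 4), 14);
    assert_eq!(seek_position(10, -4), 6);
    assert_eq!(seek_position(10, -20), 0); // clamped rather than wrapped
}
```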
This trait is meant to abstract whether a reader is actually implemented with an
underlying buffer. For all readers which are implemented as such, we can
efficiently implement things like read_char, read_line, read_until, etc. There
are two required methods for managing the internal buffer, and otherwise
read_line and friends can all become default methods.
Closes #10334
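A hedged sketch of the shape described above; the `fill_buf`/`consume` names follow the later convention and this is not the historical std::io code, but it shows how `read_until` can become a default method on top of two buffer-management methods:

```rust
use std::io::{self, Read};

// Two required methods expose the internal buffer; read_until (and friends)
// can then be written once as default methods.
trait Buffer: Read {
    fn fill_buf(&mut self) -> io::Result<&[u8]>;
    fn consume(&mut self, amt: usize);

    fn read_until(&mut self, delim: u8, out: &mut Vec<u8>) -> io::Result<usize> {
        let mut total = 0;
        loop {
            let (done, used) = {
                let available = self.fill_buf()?;
                match available.iter().position(|&b| b == delim) {
                    Some(i) => {
                        out.extend_from_slice(&available[..=i]);
                        (true, i + 1)
                    }
                    None => {
                        out.extend_from_slice(available);
                        (available.is_empty(), available.len())
                    }
                }
            };
            self.consume(used);
            total += used;
            if done {
                return Ok(total);
            }
        }
    }
}
```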
Filled in the implementations of Writer and Seek for BufWriter. It
raises the io_error condition if a write cannot fit in the buffer.
The Seek implementation for MemWriter, which was incorrectly using
unsigned arithmetic to add signed offsets, has also been replaced.
Now the privacy pass returns enough information that other passes do not need to duplicate the visibility rules, and the missing_doc implementation is more consistent with other lint checks.
Previously, the `exported_items` set created by the privacy pass was
incomplete. Specifically, it did not include items that had been defined
at a private path but then `pub use`d at a public path. This commit
finds all crate exports during the privacy pass. Consequently, some code
in the reachable pass and in rustdoc is no longer necessary. This commit
then removes the separate `MissingDocLintVisitor` lint pass, opting to
check the missing_doc lint in the same pass as the other lint checkers, using
the visibility results computed by the privacy pass.
Fixes #9777.
This fills in some missing docs in the nums package. Let me know if this is on the right track for what's wanted for docs. I can probably fill in more in the future. Thanks.
(As a side note, the precedence of the unary minus operator `-` tripped me up for a bit: I would expect `-25.0f32.sqrt()` to result in NaN instead of `-5.0`.)
I was benchmarking rust-http recently, and I saw that 50% of its time was spent
creating buffered readers/writers. Although rust-http wasn't using
std::rt::io::buffered, the same idea applies here. It's much cheaper to
malloc a large region and not initialize it than to set it all to 0. Buffered
readers/writers never use uninitialized data, and their internal buffers are
encapsulated, so any use of uninitialized slots is an implementation bug in
the readers/writers.
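An illustrative comparison in present-day Rust (not the std::rt::io::buffered code) of allocating capacity versus zero-initializing:

```rust
// Reserving capacity only allocates, while `vec![0; n]` also writes n zero
// bytes. A buffered reader/writer tracks how many bytes of its buffer are
// valid and never reads past that point, so the zeroing is pure overhead.
fn main() {
    const CAP: usize = 64 * 1024;

    let zeroed: Vec<u8> = vec![0u8; CAP];            // allocates and zeroes
    let reserved: Vec<u8> = Vec::with_capacity(CAP); // allocates only

    assert_eq!(zeroed.len(), CAP);
    assert_eq!(reserved.len(), 0);
    assert!(reserved.capacity() >= CAP);
}
```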
The UvProcess exit callback is called with a zero exit status and non-zero termination signal when a child is terminated by a signal.
If a parent checks only the exit status (for example, only checks the return value from `wait()`), it may believe the process completed successfully when it actually failed.
Helpers for common use-cases are in `std::rt::io::process`.
Should resolve https://github.com/mozilla/rust/issues/10062.
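A hedged sketch of the distinction in modern Rust's std::process (not the old std::rt::io::process API; the `false` command assumes a Unix-like environment):

```rust
use std::process::Command;

// A child killed by a signal has no exit code, so checking a single numeric
// status is not enough.
fn main() -> std::io::Result<()> {
    let status = Command::new("false").status()?;
    if status.success() {
        println!("child succeeded");
    } else {
        match status.code() {
            Some(code) => println!("child exited with code {code}"),
            // On Unix, std::os::unix::process::ExitStatusExt::signal()
            // would reveal the terminating signal here.
            None => println!("child was terminated by a signal"),
        }
    }
    Ok(())
}
```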
As we start to move runtime components into the crate map, it's becoming harder
and harder to start the runtime from a C function as rust is embedded in another
application. Right now, if you compile a rust crate as a dynamic library which is
then linked into another application, then when using std::rt::start there are no
local I/O services, even though rustuv was linked against and requested. The reason
for this is that there is no top level crate map available specifying where to
find libuv I/O.
This option is not meant to be used regularly, but rather whenever compiling a
final library crate and linking it into another application. This lifts the
requirement that to get a crate map you must have the final destination be an
executable.
Since the removal of privacy from resolve, this flag is no longer necessary to
get the test runner working. All of the privacy checks are bypassed by a special
item attribute in the privacy visitor.
Closes #4947
precise_time_ns
The QueryPerformance* functions take a LARGE_INTEGER, which is a signed
64-bit integer rather than an unsigned 64-bit integer. `ts.tv_sec`, too,
is a signed integer, so `ns_per_s` has been changed to an int64_t.
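A rough sketch of the arithmetic with a hypothetical helper (not the libextra code), keeping the intermediate math signed:

```rust
// QueryPerformanceCounter/Frequency return signed 64-bit values, so keep the
// intermediate math in i64 and convert only at the end.
fn counter_to_ns(ticks: i64, ticks_per_sec: i64) -> u64 {
    const NS_PER_S: i64 = 1_000_000_000;
    let secs = ticks / ticks_per_sec; // whole seconds first...
    let rem = ticks % ticks_per_sec;  // ...then the leftover ticks,
    (secs * NS_PER_S + rem * NS_PER_S / ticks_per_sec) as u64 // to limit overflow
}

fn main() {
    // e.g. a 10 MHz performance counter
    assert_eq!(counter_to_ns(25_000_000, 10_000_000), 2_500_000_000);
}
```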
The commit messages have more details, but this removes all analysis and usage related to fixed_stack_segment and rust_stack attributes. It's now the assumption that we always have "enough stack" and we'll implement detection of stack overflow through other means.
The stack overflow detection is currently implemented for rust functions, but it is unimplemented for C functions (we still don't have guard pages).
I increased this to 4MB when I implemented abort-on-stack-overflow for Rust
functions. Now that the fixed_stack_segment attribute is removed, no rust
function will ever reasonably request 2MB of stack (due to calling an FFI
function).
The default size of 2MB should be plenty for everyday use-cases, and tasks can
still request more stack via the spawning API.
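As a present-day illustration of requesting a larger stack at spawn time (std::thread::Builder here, not the 2013 task spawning API):

```rust
use std::thread;

fn main() {
    let handle = thread::Builder::new()
        .stack_size(8 * 1024 * 1024) // ask for 8 MB instead of the default
        .spawn(|| {
            // deep recursion or large stack frames would go here
            println!("running with a larger stack");
        })
        .expect("failed to spawn thread");

    handle.join().unwrap();
}
```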
These two attributes are no longer useful now that Rust has decided to leave
segmented stacks behind. It is assumed that the rust task's stack is always
large enough to make an FFI call (due to the stack being very large).
There's always the case of stack overflow, however, to consider. This does not
change the behavior of stack overflow in Rust. This is still normally triggered
by the __morestack function and aborts the whole process.
C stack overflow will continue to corrupt the stack, however (as it did before
this commit as well). The future improvement of a guard page at the end of every
rust stack is still unimplemented and is intended to be the mechanism through
which we attempt to detect C stack overflow.
Closes #8822. Closes #10155.
This was marked xfail-test.
According to Rust issue #912, this will not be fixed.
There's no point in keeping the test if it is never intended to pass.
The logging macros all use libuv-based I/O, and there was one stray debug
statement in task::spawn which was executing before the I/O context was ready.
Remove it and add a test to make sure that we can continue to debug this sort of
code.
Closes #10405
When a channel is destroyed, it may attempt scheduler operations which could
move a task off of its I/O scheduler. This is obviously a bad interaction, and
some finesse is required to make it work (making destructors run at the right
time).
Closes #10375
It appears that uv's support for interacting with a stdio stream as a tty when
it's actually a pipe is pretty problematic. To get around this, promote a check
to see if the stream is a tty to the top of the tty constructor, and bail out
quickly if it's not identified as a tty.
Closes #10237
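A sketch of the check-for-a-tty-up-front idea using modern Rust's IsTerminal trait (not the libuv binding itself):

```rust
use std::io::{self, IsTerminal};

// Bail out early instead of letting tty-specific calls fail on a pipe.
fn main() {
    if io::stdout().is_terminal() {
        println!("stdout is a tty; tty-specific setup is safe");
    } else {
        println!("stdout is a pipe or file; skip tty handling");
    }
}
```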
In the ideal world, uv I/O could be canceled safely at any time. In reality,
however, we are unable to do this. Right now linked failure is fairly flaky as
implemented in the runtime, making it very difficult to test whether the linked
failure mechanisms inside of the uv bindings are ready for this kind of
interaction.
Right now, all constructors will execute in a task::unkillable block, and all
homing I/O operations will prevent linked failure in the duration of the homing
operation. What this means is that tasks which perform I/O are still susceptible
to linked failure, but the I/O operations themselves will never get interrupted.
Instead, the linked failure will be received at the edge of the I/O operation.
It turns out that the uv implementation would cause a use-after-free if the idle
callback was used after the call to `close`, and additionally nothing would ever
really work that well if `start()` were called twice. To change this, the
`start` and `close` methods were removed in favor of specifying the callback at
creation, and allowing destruction to take care of closing the watcher.
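A hedged sketch of the resulting shape, with a made-up `IdleWatcher` standing in for the uv binding: the callback is supplied once at creation and the destructor takes care of closing.

```rust
struct IdleWatcher {
    callback: Box<dyn FnMut()>,
}

impl IdleWatcher {
    fn new(callback: Box<dyn FnMut()>) -> IdleWatcher {
        // In the real binding this is where the handle would be initialized
        // and the callback registered exactly once.
        IdleWatcher { callback }
    }

    fn fire(&mut self) {
        (self.callback)();
    }
}

impl Drop for IdleWatcher {
    fn drop(&mut self) {
        // The real binding would close the underlying handle here, removing
        // the separate `close` call that allowed use-after-free.
    }
}

fn main() {
    let mut idle = IdleWatcher::new(Box::new(|| println!("idle tick")));
    idle.fire();
} // `idle` is closed automatically when it goes out of scope
```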