Otherwise, run-pass/deriving-primitive.rs breaks on 32-bit platforms,
because `int::min_value` is `0xffffffff7fffffff` when evaluated for the
discriminant declaration.
Not only can discriminants be smaller than int now, but they can be
larger than int on 32-bit targets. This has obvious implications for the
reflection interface. Without this change, things fail with LLVM
assertions when we try to "extend" i64 to i32.
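As a rough illustration (hypothetical enum, not the actual test case), modern Rust syntax for a discriminant that does not fit in 32 bits looks like this:

```rust
// Hypothetical example: a discriminant that only fits in a 64-bit type,
// so it cannot be silently truncated to i32.
#[allow(dead_code)]
#[repr(i64)]
enum Big {
    Small = 1,
    // 0x1_0000_0000 is 2^32, too large for a 32-bit discriminant.
    Huge = 0x1_0000_0000,
}

fn main() {
    // Casting a variant back out recovers the full 64-bit value.
    assert_eq!(Big::Huge as i64, 0x1_0000_0000);
}
```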
This drops more of the old C++ runtime to rather be written in rust. A few
features were lost along the way, but hopefully not too many. The main loss is
that there are no longer backtraces associated with allocations (rust doesn't
have a way of acquiring those just yet). Other than that though, I believe that
the rest of the debugging utilities made their way over into rust.
Closes #8704
Some code cleanup, sorting of import blocks
Removed std::unstable::UnsafeArc's use of Either
Added run-fail tests for the new FailWithCause impls
Changed future_result and try to return Result<(), ~Any>.
- Internally, there is an enum of possible fail messages passed around.
- In case of linked failure or a string message, the ~Any gets
lazily allocated in future_result's recv method.
- For that, future result now returns a wrapper around a Port.
- Moved and renamed task::TaskResult into rt::task::UnwindResult
and made it an internal enum.
- Introduced a replacement typedef `type TaskResult = Result<(), ~Any>` (sketched below).
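A minimal sketch of the resulting shape in modern Rust syntax (`~Any` is spelled `Box<dyn Any + Send>` today; the function below is illustrative, not the 0.8-era internals):

```rust
use std::any::Any;

// Modern spelling of the replacement typedef: Ok(()) on success,
// Err(payload) carrying the boxed failure value after an unwind.
type TaskResult = Result<(), Box<dyn Any + Send>>;

fn report(result: TaskResult) {
    match result {
        Ok(()) => println!("task succeeded"),
        // The payload is only inspected (and, per the change above,
        // only allocated) when someone actually looks at the failure.
        Err(payload) => match payload.downcast::<&'static str>() {
            Ok(msg) => println!("task failed: {}", msg),
            Err(_) => println!("task failed with a non-string payload"),
        },
    }
}

fn main() {
    report(Ok(()));
    report(Err(Box::new("oops") as Box<dyn Any + Send>));
}
```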
Remove the Sha1, Sha2, MD5, and MD4 algorithms. SipHash is also a cryptographically secure hash function and IsaacRng is a cryptographically secure RNG - I left those alone but removed comments that implied they were suitable for cryptographic use. I thought that MD4 was used for something by the compiler, but everything still seems to work with it removed, so I guess not.
One thing that I'm not sure about - workcache.rs and workcache_support.rs (in librustpkg) both depend on Sha1. Without Sha1, the only hash function left is SipHash, so I switched that code over to use SipHash. The output size of SipHash is only 64 bits, however - much less than the 160 bits of Sha1. I'm not sure this is a problem. Without other cryptographic hashes in the tree, I'm not sure what else to do. I considered moving Sha1 into librustpkg, but I don't know if that makes sense.
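For reference, a minimal sketch of hashing input bytes down to a 64-bit digest with the standard library's SipHash-based hasher (`DefaultHasher` here is a modern stand-in, not the librustpkg code):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Hash arbitrary bytes down to a 64-bit value, roughly what workcache
// gets once Sha1 is gone and SipHash is the only hash left in the tree.
fn digest(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    hasher.write(bytes);
    hasher.finish()
}

fn main() {
    let d = digest(b"contents of some source file");
    // Only 64 bits of output, versus 160 bits for Sha1.
    println!("{:016x}", d);
}
```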
If merged, this closes #9300.
I'm not entirely sure why this is happening, but the server task never sees
the second send from the client task, and this test will very reliably fail to
complete on windows.
It was pretty much a miracle that these tests were ever passing. They would
never have passed in the single-threaded case because only one sigint is ever
generated by the tests, but when run in parallel two sigints are generated.
This optimizes the `home_for_io` code path by requiring fewer scheduler
operations in some situations.
When moving to your home scheduler, this no longer forces a context switch if
you're already on the home scheduler. Instead, the homing code now simply pins
you to your current scheduler (making it so you can't be stolen away). If you're
not on your home scheduler, then we context switch away, sending you to your
home scheduler.
When the I/O operation is done, we also no longer forcibly trigger a
context switch. Instead, the action depends on whether the task is homed or
not. If a task does not have a home, then it is simply flagged as homeless again
and no context switch is performed. If a task is homed to the current
scheduler, then we don't do anything, and if the task is homed to a foreign
scheduler, then it's sent along its merry way.
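A rough sketch of that decision (the types and functions below are invented for illustration and are not the actual scheduler API):

```rust
// Hypothetical illustration of the homing decision described above.
type SchedId = usize;

#[allow(dead_code)]
enum Home {
    AnySched,        // the task has no home scheduler
    Sched(SchedId),  // the task is homed to a particular scheduler
}

fn go_home_for_io(current: SchedId, home: &Home) {
    match home {
        // Already on the home scheduler (or homeless): just pin to the
        // current scheduler so the task can't be stolen away.
        Home::Sched(id) if *id == current => pin_to_current_scheduler(),
        Home::AnySched => pin_to_current_scheduler(),
        // Otherwise, context switch away toward the home scheduler.
        Home::Sched(id) => send_to_scheduler(*id),
    }
}

// Stand-ins for the real scheduler operations.
fn pin_to_current_scheduler() {}
fn send_to_scheduler(_target: SchedId) {}

fn main() {
    go_home_for_io(0, &Home::Sched(0)); // no context switch needed
    go_home_for_io(0, &Home::Sched(1)); // travels to scheduler 1
}
```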
I verified that there are about a third as many `write` syscalls done in print
operations now. Libuv uses write to implement async handles, and the homing
before and after each I/O operation was triggering a write on these async
handles. Additionally, using the terrible benchmark of printing 10k times in a
loop, this drives the runtime from 0.6s down to 0.3s (yay!).
Almost all languages provide some form of buffering of the stdout stream, and
this commit adds this feature for rust. A handle to stdout is lazily initialized
in the Task structure as a buffered owned Writer trait object. The buffer
behavior depends on where stdout is directed to. Like C, this line-buffers the
stream when the output goes to a terminal (flushes on newlines), and also like C
this uses a fixed-size buffer when output is not directed at a terminal.
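A minimal sketch of that policy using today's `std::io` types (rather than the Task-owned writer described above):

```rust
use std::io::{self, BufWriter, IsTerminal, LineWriter, Write};

// Pick a buffering strategy for stdout the same way C's stdio does:
// line-buffered when talking to a terminal, block-buffered otherwise.
fn buffered_stdout() -> Box<dyn Write> {
    let out = io::stdout();
    if out.is_terminal() {
        // Flushes whenever a newline is written.
        Box::new(LineWriter::new(out))
    } else {
        // Fixed-size buffer; flushed when full or on drop.
        Box::new(BufWriter::new(out))
    }
}

fn main() -> io::Result<()> {
    let mut out = buffered_stdout();
    for _ in 0..10_000 {
        writeln!(out, "what")?;
    }
    out.flush()
}
```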
We may decide the fixed-size buffering is overkill, but it certainly does reduce
write syscall counts when piping output elsewhere. This is a *huge* benefit to
any code using logging macros or the printing macros. Formatting emits calls to
`write` very frequently, and to have each of them backed by a write syscall was
very expensive.
In a local benchmark of printing 10000 lines of "what" to stdout, I got the
following timings:
when   | terminal | redirected
-------|----------|-----------
before | 0.575s   | 0.525s
after  | 0.197s   | 0.013s
C      | 0.019s   | 0.004s
I can also confirm that we're buffering the output appropriately in both
situations. We're still far slower than C, but I believe much of that has to do
with the "homing" that all tasks do; we're still performing an order of
magnitude more write syscalls than C does.
It's not guaranteed that there will always be an event loop to run, and this
implementation will serve as an incredibly basic one which does not provide any
I/O, but allows the scheduler to still run.
cc #9128
This is a peculiar function to require event loops to implement, and it's only
used in one spot during tests right now. Instead, a possibly more robust API
for timers should be used rather than requiring all event loops to implement
this curious-looking function.
The PausibleIdleCallback must have some handle into the event loop, and because
struct fields are destroyed top-to-bottom, in declaration order, the event loop
was getting destroyed before the idle callback was.
I can't confirm that this fixes a problem in how we use libuv, but it does
semantically fix a problem for usage with other event loops.
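A small sketch of the field-ordering rule at play (the struct and field names are hypothetical, but the drop order is real: fields are dropped top-to-bottom in declaration order):

```rust
// Unit structs standing in for the real event loop and idle callback.
struct EventLoop;
struct IdleCallback;

impl Drop for EventLoop {
    fn drop(&mut self) { println!("event loop destroyed"); }
}
impl Drop for IdleCallback {
    fn drop(&mut self) { println!("idle callback destroyed"); }
}

// With the event loop declared first, it is destroyed before the idle
// callback that still holds a handle into it: the problem described above.
#[allow(dead_code)]
struct BadOrder {
    event_loop: EventLoop,
    idle_callback: IdleCallback,
}

// Declaring the idle callback first destroys it before the event loop.
#[allow(dead_code)]
struct GoodOrder {
    idle_callback: IdleCallback,
    event_loop: EventLoop,
}

fn main() {
    println!("bad ordering:");
    drop(BadOrder { event_loop: EventLoop, idle_callback: IdleCallback });
    println!("good ordering:");
    drop(GoodOrder { idle_callback: IdleCallback, event_loop: EventLoop });
}
```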
This adds constructors to pipe streams in the new runtime to take ownership of
file descriptors, and also fixes a few tests relating to the std::run changes
(new errors are raised on io_error and one test is xfail'd).
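A minimal modern-syntax sketch of the ownership-taking idea, using `std::fs::File` and a Unix `RawFd` as stand-ins for the runtime's pipe streams:

```rust
#[cfg(unix)]
fn main() -> std::io::Result<()> {
    use std::fs::File;
    use std::io::Write;
    use std::os::fd::{FromRawFd, IntoRawFd, RawFd};

    // Pretend some other API handed us a raw descriptor; here one is
    // detached from a freshly created file to keep the example self-contained.
    let raw: RawFd = File::create("/tmp/fd-ownership-example")?.into_raw_fd();

    // Constructing from a raw fd takes ownership: the File is now
    // responsible for closing the descriptor when it is dropped.
    let mut owned = unsafe { File::from_raw_fd(raw) };
    owned.write_all(b"hello from an adopted fd\n")?;
    Ok(())
}

#[cfg(not(unix))]
fn main() {}
```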