glibc >= 2.15 has a __pthread_get_minstack() function that returns
PTHREAD_STACK_MIN plus however many bytes are needed for thread-local
storage. Use it when it's available because just PTHREAD_STACK_MIN is
not enough in applications that have big thread-local storage
requirements.
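A minimal sketch of how such a lookup could work, assuming the `libc` crate and a run-time `dlsym` probe (the actual patch may resolve the symbol differently, e.g. via weak linkage); `__pthread_get_minstack` is a private glibc symbol, so it may not be present at all:
```
use std::mem;

// Sketch only: probe for glibc's private __pthread_get_minstack at run time,
// falling back to PTHREAD_STACK_MIN when the symbol is absent (older glibc,
// or non-glibc platforms).
unsafe fn min_stack_size(attr: *const libc::pthread_attr_t) -> libc::size_t {
    type MinStackFn = unsafe extern "C" fn(*const libc::pthread_attr_t) -> libc::size_t;
    let sym = libc::dlsym(
        libc::RTLD_DEFAULT,
        b"__pthread_get_minstack\0".as_ptr() as *const _,
    );
    if sym.is_null() {
        libc::PTHREAD_STACK_MIN
    } else {
        let f: MinStackFn = mem::transmute(sym);
        f(attr)
    }
}
```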
Fixes #6233.
Enforce that the stack size is > RED_ZONE + PTHREAD_STACK_MIN. If the
call to pthread_attr_setstacksize() subsequently fails with EINVAL, it
means that the platform requires the stack size to be a multiple of the
page size. In that case, round up to the nearest page and retry.
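For illustration, a hedged sketch of that retry, assuming the `libc` crate; `RED_ZONE` here is a stand-in constant, not the runtime's actual value:
```
// Sketch only: enforce a floor of RED_ZONE + PTHREAD_STACK_MIN, and if the
// platform rejects that size with EINVAL, round up to a multiple of the page
// size and retry.
unsafe fn set_stack_size(attr: *mut libc::pthread_attr_t, requested: usize) {
    const RED_ZONE: usize = 20 * 1024; // stand-in for the runtime's red zone
    let floor = libc::PTHREAD_STACK_MIN as usize + RED_ZONE;
    let size = std::cmp::max(requested, floor);

    if libc::pthread_attr_setstacksize(attr, size as libc::size_t) == libc::EINVAL {
        // EINVAL is taken to mean "must be a multiple of the page size".
        let page = libc::sysconf(libc::_SC_PAGESIZE) as usize;
        let rounded = (size + page - 1) / page * page;
        assert_eq!(
            libc::pthread_attr_setstacksize(attr, rounded as libc::size_t),
            0
        );
    }
}
```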
Fixes #11694.
Represents the minimum size of a thread's stack. As such, it's both
platform- and architecture-specific.
I put it under posix01 even though it predates POSIX.1-2001 by some
years. I believe it was first formalized in SUSv2. I doubt anyone
cares, though.
This test is designed to ensure that running a non-existent executable
results in the correct error message (FileNotFound, in the case of this
test). However, if you try to run an executable that doesn't exist, and
doing so requires searching through $PATH, and one of the $PATH components
is not readable, then a PermissionDenied error will be returned instead
of FileNotFound.
Using an absolute path skips the $PATH search in exec, bypassing the code path that would have returned PermissionDenied.
In the specific case of my machine, /usr/bin/games was part of $PATH, but my user account wasn't in the games group (and so couldn't read /usr/bin/games).
See the man pages for execv and execve for more details.
I've tested this on Linux and OS X, and I am fairly certain that there will be no problems on Windows.
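For what it's worth, a hedged sketch of the idea using today's `std::process` API (not the io API this change targets); the path below is made up:
```
use std::io::ErrorKind;
use std::process::Command;

// Sketch only: an absolute path to a nonexistent binary never triggers the
// $PATH walk, so the error stays NotFound even when a $PATH entry (like
// /usr/bin/games on the machine described above) is unreadable.
fn main() {
    let err = Command::new("/definitely/not/an/executable")
        .spawn()
        .unwrap_err();
    assert_eq!(err.kind(), ErrorKind::NotFound);
}
```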
`Times::times` was always a second-class loop because it did not support the `break` and `continue` operations. Its playful appeal (which I liked) was then lost after `do` was disabled for closures. It's time to let this one go.
I found it awkward to have `MutableCloneableVector` and `CloneableIterator` on the one hand, and `CopyableVector` etc. on the other.
The traits concerned are:
* `CopyableVector` --> `CloneableVector`
* `OwnedCopyableVector` --> `OwnedCloneableVector`
* `ImmutableCopyableVector` --> `ImmutableCloneableVector`
* `CopyableTuple` --> `CloneableTuple`
The general consensus is that we want to move away from conditions for I/O, and I propose a two-step plan for doing so:
1. Warn about unused `Result` types. When all of I/O returns `Result`, you will only need to inspect the return value for an error *if* you have a result you want to look at. By default, return values like `write`'s `Result<(), Error>` would all be silently ignored. This lint prevents blind ignorance of these return values, letting you know that there's something you should do about them.
2. Implement a `try!` macro:
```
macro_rules! try( ($e:expr) => (match $e { Ok(e) => e, Err(e) => return Err(e) }) )
```
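For illustration, a hedged sketch of how the macro would read at a call site, written with today's `std::io` names for concreteness (the proposal predates this API) and edition-2015 syntax:
```
use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

// Sketch only: each try! either unwraps an Ok or returns the Err early.
fn read_config(path: &Path) -> io::Result<String> {
    let mut file = try!(File::open(path));
    let mut contents = String::new();
    try!(file.read_to_string(&mut contents));
    Ok(contents)
}
```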
With these two tools combined, I feel that we get almost all the benefits of conditions. The first step (the lint) is a sanity check that you're not ignoring return values at call sites. The second step provides a convenient way of returning early out of a sequence of computations. After thinking about this for a while, I don't think that we need the so-called "do-notation" in the compiler itself, because I think it's just *too* specialized. Additionally, the `try!` macro is super lightweight, easy to understand, and works almost everywhere. As soon as you want to do something more fancy, my answer is "use match".
Basically, with these two tools in action, I would be comfortable removing conditions. What do others think about this strategy?
----
This PR specifically implements the `unused_result` lint. I actually added two lints, `unused_result` and `unused_must_use`, and the first commit has the rationale for why `unused_result` is turned off by default.
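As a hedged illustration of what the lint catches (using today's lint names, `unused_results` and `unused_must_use`, which differ slightly from this PR's `unused_result`):
```
#![warn(unused_results)]

fn write_byte() -> Result<(), ()> {
    Ok(())
}

fn main() {
    // Warns: the Result is silently dropped.
    write_byte();

    // Explicitly acknowledging the value silences the lint.
    let _ = write_byte();
}
```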
In two ways:
- for a plain `fail!(a)` we make the generic part of `begin_unwind` as small as possible (makes `fn main() { fail!() }` compile 2-3x faster, due to less monomorphisation bloat)
- for `fail!("format {}", "string")`, we avoid touching the generics completely by doing the formatting in a specialised function, which (with optimisations) saves a function call at the call-site of `fail!`. (This one has significantly less benefit than the first.)
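A hedged sketch of the shape of the change, with made-up signatures (the real `begin_unwind` differs): the generic entry point stays tiny and forwards to a non-generic, `#[inline(never)]` cold path, while the formatting case takes `fmt::Arguments` so it never touches generics at all:
```
use std::fmt;

// Sketch only: the generic shim does the bare minimum before handing off, so
// each monomorphisation instantiates almost no code.
fn begin_unwind<M: fmt::Display>(msg: M, file: &'static str, line: u32) -> ! {
    begin_unwind_inner(msg.to_string(), file, line)
}

// Sketch only: fail!("format {}", "string") would expand to a call to this,
// keeping the formatting machinery out of the generic path entirely.
fn begin_unwind_fmt(args: fmt::Arguments, file: &'static str, line: u32) -> ! {
    begin_unwind_inner(fmt::format(args), file, line)
}

#[inline(never)]
#[cold]
fn begin_unwind_inner(msg: String, file: &'static str, line: u32) -> ! {
    panic!("{}:{}: {}", file, line, msg)
}
```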
This ends up saving a single `call` instruction in the optimised code,
but saves a few hundred lines of non-optimised IR for `fn main() {
fail!("foo {}", "bar"); }` (comparing against the minimal generic
baseline from the parent commit).
This splits the vast majority of the code path taken by
`fail!()` (`begin_unwind`) into a separate non-generic inline(never)
function, so that uses of `fail!()` only monomorphise a small amount of
code, reducing code bloat and making very small crates compile faster.
These are either returned from public functions (and so really should
appear in the documentation, but don't, since they're private), or are
implementation details that are currently public.
The following are renamed:
* `min_value` => `MIN`
* `max_value` => `MAX`
* `bits` => `BITS`
* `bytes` => `BYTES`
All tests pass, except for `run-pass/phase-syntax-link-does-resolve.rs`. I doubt that failure is related, though.
Fixes #10010.