This PR cleans up various APIs. The major change is merging `Iterator` and `IteratorUtil` and renaming functions like `transform` to `map`. I also merged `DoubleEndedIterator` and `DoubleEndedIteratorUtil`, and renamed the various `.consume*` functions to `.move_iter()`. This implements part of #7887.
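A rough sketch of the renamed methods in use (modern Rust spelling for illustration; `into_iter` stands in here for the 2013-era `move_iter`):

```
fn main() {
    let v = vec![1, 2, 3];
    // `map` (formerly `transform`) lazily applies a closure to each element.
    let doubled: Vec<i32> = v.iter().map(|x| x * 2).collect();
    // `into_iter` (standing in for `move_iter`) consumes the vector and yields
    // its elements by value.
    let owned: Vec<i32> = v.into_iter().collect();
    assert_eq!(doubled, vec![2, 4, 6]);
    assert_eq!(owned, vec![1, 2, 3]);
}
```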
I'm a bit disappointed that I couldn't figure out how to factor out more of the code implementing `extra::sync`, but I feel this is an okay start. I also added some documentation explaining that `WaitQueue` isn't thread-safe and needs an exclusive lock.
@bblum
When there is only a single store to the ret slot that dominates the
load that gets the value for the "ret" instruction, we can elide the
ret slot and directly return the operand of the dominating store
instruction. This is the same thing that clang does, except for a
special case that doesn't seem to affect us.
Fixes #8238
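As a hypothetical illustration (not a case taken from the patch), a function like the following has a single return path, so its return value no longer needs to round-trip through a stack slot:

```
// Hypothetical example: with only one store to the ret slot dominating the final
// load, codegen can return the stored operand directly instead of emitting an
// alloca, a store, and a load before the `ret` instruction.
fn add(a: i32, b: i32) -> i32 {
    a + b
}
```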
This PR fixes #7235 and #3371 by removing trailing nulls from `str` types. Instead, it replaces the creation of C strings with a new type, `std::c_str::CString`, which wraps a malloced byte array and respects two invariants:
* No interior nulls
* Ends with a trailing null
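For illustration only, the modern `std::ffi::CString` (the descendant of the `std::c_str::CString` introduced here) enforces the same two invariants:

```
use std::ffi::CString;

fn main() {
    // Interior nulls are rejected rather than silently truncating the string.
    assert!(CString::new("hello\0world").is_err());
    // A single trailing null is appended to the wrapped byte array.
    let c = CString::new("hello").unwrap();
    assert_eq!(c.as_bytes_with_nul(), b"hello\0");
}
```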
r? @graydon
Also, notably, make rustpkgtest depend on the rustpkg executable (otherwise, tests that shell out to rustpkg might run when rustpkg doesn't exist).
Get rid of special cases for names beginning with "rust-" or
containing hyphens, and just store a Path in a package ID. The Rust-identifier
for the crate is none of rustpkg's business.
This commit allows you to write:
`extern mod x = "a/b/c";`
which means rustc will search in the RUST_PATH for a package with ID `a/b/c` and bind it to the name `x` if it's found.
Incidentally, move `get_relative_to` from `back::rpath` into `std::path`.
Mostly optimizing TLS accesses to bring local heap allocation performance
closer to that of oldsched. It's not completely at parity but removing the
branches involved in supporting oldsched and optimizing pthread_get/setspecific
to instead use our dedicated TCB slot will probably make up for it.
`env!` aborts compilation if the specified environment variable is not defined, and takes an optional second argument containing a custom error message. `option_env!` creates an `Option<&'static str>` containing the value of the environment variable.
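A sketch of that behaviour (modern syntax; the variable names are made up for the example):

```
fn main() {
    // Aborts compilation with the given message if CFG_VERSION is not defined.
    let version: &'static str = env!("CFG_VERSION", "CFG_VERSION must be set");

    // Never aborts compilation: Some(value) if the variable is defined, None otherwise.
    let maybe: Option<&'static str> = option_env!("OPTIONAL_KEY");

    println!("version: {}, optional key set: {}", version, maybe.is_some());
}
```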
There are no run-pass tests that check the behavior when the environment
variable is defined since the test framework doesn't support setting
environment variables at compile time as opposed to runtime. However,
both env! and option_env! are used inside of rustc itself, which should
act as a sufficient test.
Fixes #2248.
When running rusti 32-bit tests from a 64-bit host, these errors came up frequently. My best explanation of what was happening is:
1. If you hash the same `int` value on 32-bit and 64-bit, you get two different hashes.
2. In a cross-compile situation, say x86_64 building an i686 library, all of the hashes will be 64-bit hashes.
3. If you then use those i686 libraries and attempt to link against them, you're calculating hashes with a 32-bit `int` instead of a 64-bit one, so you get different hashes and can't find items in the metadata (which were generated with a 64-bit `int`).
This patch changes items to always be hashed as an `i64` to preserve the hash value across architectures. Here's a before/after of the state of the rusti tests with this patch:
```
host  target  before  after
64    64      yes     yes
64    32      no      no (llvm assertion)
32    64      no      yes
32    32      no      no (llvm assertion)
```
Basically, one case started working, but when the target is 32-bit, LLVM currently has a lot of problems generating code. That's a separate issue, though.
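The idea behind hashing as `i64`, in a small hypothetical sketch (modern Rust, not rustc's actual metadata hasher):

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Widening to i64 before hashing means the same logical value produces the same
// hash whether the host's native integer is 32 or 64 bits wide.
fn stable_item_hash(value: isize) -> u64 {
    let mut hasher = DefaultHasher::new();
    (value as i64).hash(&mut hasher);
    hasher.finish()
}
```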
Basically, generic containers should not use the default methods, since the element type may not guarantee a total order. `str` can use them since `u8`'s `Ord` guarantees a total order. Floating-point numbers are also broken with the default methods because of NaN. Thanks to @thestinger.
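A quick illustration of the NaN problem (modern Rust, for demonstration only):

```
fn main() {
    let nan = f64::NAN;
    // NaN is unordered with everything, including itself, so floats only have a
    // partial order and default methods that assume a total order are unsound.
    assert_eq!(nan.partial_cmp(&nan), None);
    assert!(!(nan < nan) && !(nan > nan) && nan != nan);
}
```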
`Timespec` also guarantees a total order, AIUI. I'm unsure whether `extra::semver::Identifier` does, so I left it alone. Proof needed.
Signed-off-by: OGINO Masanori <masanori.ogino@gmail.com>
Code that collects fields in struct-like patterns used to ignore wildcard patterns like `Foo{_}`. To compensate, `enter_defaults` treated struct-like patterns as defaults (according to my understanding of the situation). However, this behaviour caused code like this:
```
enum E {
    Foo{f: int},
    Bar
}
let e = Bar;
match e {
    Foo{f: _f} => { /* do something (1) */ }
    _ => { /* do something (2) */ }
}
```
to treat the pattern `Foo{f: _f}` as a default. That caused improper behaviour and even segfaults when trying to destructure `Bar` as `Foo{f: _f}`.
Issues: #5625, #5530.
This patch fixes `collect_record_or_struct_fields` to distinguish between the case of a single wildcard struct-like pattern and the case of no struct-like pattern at all. The former case is now handled by `enter_rec_or_struct` (rather than by `enter_defaults`).
Closes #5625.
Closes #5530.
`FromStr` is implemented from scratch. It is a bit overengineered, however.
The old implementation handles errors by `fail!()`-ing, and it has bugs: it accepts `127.0.0.1::127.0.0.1` as an IPv6 address and does not handle all ipv4-in-ipv6 schemes. So I decided to implement the parser from scratch.
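For comparison, here are two of the cases mentioned above exercised against the modern `std::net` parser (illustration only, not the code this PR touches):

```
use std::net::Ipv6Addr;
use std::str::FromStr;

fn main() {
    // A correct parser rejects the string the old implementation accepted.
    assert!(Ipv6Addr::from_str("127.0.0.1::127.0.0.1").is_err());
    // IPv4-mapped addresses are one of the ipv4-in-ipv6 forms that must parse.
    assert!(Ipv6Addr::from_str("::ffff:192.0.2.1").is_ok());
}
```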
This pull request converts the scheduler from a naive shared-queue scheduler to a naive work-stealing scheduler. The deque is still a queue inside a lock, but there is nevertheless a substantial performance gain. Fiddling with the messaging benchmark, I got a ~10x speedup and observed massively reduced memory usage.
There are still *many* opportunities for optimization, but based on my experience so far it is a clear performance win as it stands.
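A minimal sketch of the "deque inside a lock" idea (hypothetical types, not the scheduler's actual code): each scheduler pops work from its own queue and, when that runs dry, steals from another scheduler's queue.

```
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Hypothetical task type standing in for the scheduler's green tasks.
type Task = Box<dyn FnOnce() + Send>;

struct Worker {
    local: Arc<Mutex<VecDeque<Task>>>,       // this scheduler's own queue
    others: Vec<Arc<Mutex<VecDeque<Task>>>>, // the other schedulers' queues
}

impl Worker {
    fn next_task(&self) -> Option<Task> {
        // Prefer local work from the front of our own queue.
        if let Some(task) = self.local.lock().unwrap().pop_front() {
            return Some(task);
        }
        // Otherwise try to steal from the back of another scheduler's queue.
        for victim in &self.others {
            if let Some(task) = victim.lock().unwrap().pop_back() {
                return Some(task);
            }
        }
        None
    }
}
```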
This is a fairly large rollup, but I've tested everything locally, and none of
it should be platform-specific.
r=alexcrichton (bdfdbdd)
r=brson (d803c18)
r=alexcrichton (a5041d0)
r=bstrie (317412a)
r=alexcrichton (135c85e)
r=thestinger (8805baa)
r=pcwalton (0661178)
r=cmr (9397fe0)
r=cmr (caa4135)
r=cmr (6a21d93)
r=cmr (4dc3379)
r=cmr (0aa5154)
r=cmr (18be261)
r=thestinger (f10be03)