It's not guaranteed that there will always be an event loop to run, so this
implementation serves as an incredibly basic one which provides no I/O but
still allows the scheduler to run.
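A minimal sketch in modern Rust of the shape such a loop might take (names and structure are illustrative, not the actual rt code): with no I/O sources, the loop is just a queue of callbacks that runs until drained.

```rust
use std::collections::VecDeque;

struct BasicLoop {
    work: VecDeque<Box<dyn FnOnce()>>,
}

impl BasicLoop {
    fn new() -> BasicLoop {
        BasicLoop { work: VecDeque::new() }
    }

    /// Queue a callback for the next turn of the loop.
    fn callback(&mut self, f: Box<dyn FnOnce()>) {
        self.work.push_back(f);
    }

    /// Run until the queue drains; with no I/O sources to wait on,
    /// an empty queue means there is nothing left to do.
    fn run(&mut self) {
        while let Some(f) = self.work.pop_front() {
            f();
        }
    }
}
```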
cc #9128
The goal here is to avoid requiring a division or multiplication to compare against the length. The bounds check previously used an incorrect micro-optimization to replace the division by a multiplication, but now neither is necessary *for slices*. Unique/managed vectors will have to do a division to get the length until they are reworked/replaced.
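A hypothetical sketch of the two bounds-check shapes (the types and names are illustrative, assuming a non-zero-sized element type):

```rust
use std::mem::size_of;

// A slice stores its element count directly, so the bounds check is a
// single comparison -- no division or multiplication needed:
fn in_bounds<T>(s: &[T], i: usize) -> bool {
    i < s.len()
}

// A vector that tracks its fill in *bytes* (as the old unique/managed
// vectors do) must divide by the element size to recover the length:
struct OldVec<T> {
    ptr: *const T, // illustrative layout
    fill: usize,   // occupied size in bytes, not elements
}

fn old_in_bounds<T>(v: &OldVec<T>, i: usize) -> bool {
    i < v.fill / size_of::<T>() // the division slices no longer need
}
```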
Add a new trait BytesContainer that is implemented for both byte vectors
and strings.
Convert Path::from_vec and ::from_str to one function, Path::new().
Remove all the _str-suffixed mutation methods (push, join, with_*,
set_*) and modify the non-suffixed versions to use BytesContainer.
Remove the old path.
Rename path2 to path.
Update all clients for the new path.
Also make some miscellaneous changes to the Path APIs to help the
adoption process.
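A hedged sketch of the shape of this API; the trait and constructor names follow the commit, but the bodies are illustrative:

```rust
// Both strings and byte vectors can be viewed as containers of bytes,
// so a single Path::new() can accept either.
trait BytesContainer {
    fn container_as_bytes(&self) -> &[u8];
}

impl BytesContainer for str {
    fn container_as_bytes(&self) -> &[u8] { self.as_bytes() }
}

impl BytesContainer for [u8] {
    fn container_as_bytes(&self) -> &[u8] { self }
}

struct Path {
    repr: Vec<u8>, // the raw bytes of the path
}

impl Path {
    // One constructor replaces the old from_str/from_vec pair.
    fn new<T: BytesContainer + ?Sized>(path: &T) -> Path {
        Path { repr: path.container_as_bytes().to_vec() }
    }
}

fn main() {
    let from_str = Path::new("/usr/bin");         // string input
    let from_bytes = Path::new(&b"/usr/bin"[..]); // byte-vector input
    assert_eq!(from_str.repr, from_bytes.repr);
}
```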
This commit fixes all of the fallout of the previous commit which is an attempt
to refine privacy. There were a few unfortunate leaks which now must be plugged,
and the most horrible one is the current `shouldnt_be_public` module now inside
`std::rt`. I think that this either needs a slight reorganization of the
runtime, or otherwise it needs to just wait for the external users of these
modules to get replaced with their `rt` implementations.
Other fixes involve making things pub which should be pub, and otherwise
updating error messages that now reference privacy instead of referencing an
"unresolved name" (yay!).
The root issue is that dlerror isn't reentrant or even thread safe.
The solution implemented here is to make a yielding spin lock over an
AtomicFlag. This is pretty hacky, but the best we can do at this point.
As far as I can tell, it isn't possible to create a global mutex without
having to initialize it in a single threaded context.
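A sketch of the yielding-spin-lock idea in modern Rust terms (`AtomicBool` standing in for the old AtomicFlag; the names are illustrative): a `static` atomic needs no runtime initialization, which sidesteps the global-mutex problem.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

// A static atomic requires no single-threaded setup, unlike a mutex.
static DLERROR_LOCK: AtomicBool = AtomicBool::new(false);

fn with_dlerror_lock<R>(f: impl FnOnce() -> R) -> R {
    // Spin until we flip the flag from false to true, yielding the
    // thread on every failed attempt instead of burning CPU.
    while DLERROR_LOCK
        .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
        .is_err()
    {
        thread::yield_now();
    }
    let result = f(); // dlerror() would be read in here
    DLERROR_LOCK.store(false, Ordering::Release); // unlock
    result
}
```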
The Windows code isn't affected since errno is thread-local on Windows
and it runs inside an `atomically` block to ensure there isn't a green
thread context switch.
Closes #8156
It is simply defined as `f64` across every platform right now.
A use case hasn't been presented for a `float` type defined as the
highest precision floating point type implemented in hardware on the
platform. Performance-wise, using the smallest precision correct for the
use case greatly saves on cache space and allows for fitting more
numbers into SSE/AVX registers.
If there was a use case, this could be implemented as simply a type
alias or a struct thanks to `#[cfg(...)]`.
Closes #6592
The mailing list thread, for reference:
https://mail.mozilla.org/pipermail/rust-dev/2013-July/004632.html
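For reference, if a use case ever did appear, the `#[cfg(...)]` type-alias approach would look roughly like this (the cfg predicates are illustrative only; per the text above, every platform would currently map to `f64` anyway):

```rust
// Hypothetical "highest hardware precision" float as a cfg'd alias.
#[cfg(target_arch = "x86_64")]
pub type Float = f64;

#[cfg(not(target_arch = "x86_64"))]
pub type Float = f64; // every platform is f64 today
```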
While usage of change_dir_locked is synchronized against itself, it's not
synchronized against other relative path usage, so I'm of the opinion that it
just really doesn't help in running tests. In order to prevent the problems that
have been cropping up, this completely removes the function.
All existing tests (except one) using it have been moved to run-pass tests where
they get their own process and don't need to be synchronized with anyone else.
There is one now-ignored rustpkg test, because when I moved it to a run-pass
test it turned out that run-pass isn't set up to handle `extern mod rustc` (it
ends up with linkage failures).
This removes the stacking of type parameters that occurs when invoking
trait methods, and fixes all places in the standard library that were
relying on it. It is somewhat awkward in places; I think we'll probably
want something like the `Foo::<for T>::new()` syntax.
This means that fewer `transmute`s are required, so there is less
chance of a `transmute` not having the corresponding `forget`
(possibly leading to use-after-free, etc).
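A small sketch of the hazard being reduced (an illustrative function, not code from this change): a `transmute` that duplicates an owning value must be paired with a `forget`, or the destructor runs twice.

```rust
use std::mem;

// If a value is duplicated via transmute (e.g. to change a wrapper or
// lifetime), the original must be forgotten, or both copies will run
// the destructor -- a double free, and potentially use-after-free.
unsafe fn dup_and_forget(v: Vec<u8>) -> Vec<u8> {
    // SAFETY: illustration only -- `copy` aliases `v`'s allocation.
    let copy: Vec<u8> = unsafe { mem::transmute_copy(&v) };
    mem::forget(v); // omit this and the buffer is freed twice
    copy
}
```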
Beforehand, it was unclear whether rust was performing the "recommended set" of
optimizations provided by LLVM for code. This commit changes the way we run
passes to closely mirror that of clang, which in theory does it correctly. The
notable changes include:
* Passes are no longer explicitly added one by one. This would be difficult to
keep up with as LLVM changes, and we aren't guaranteed to always know the best
order in which to run passes
* Passes are now managed by LLVM's PassManagerBuilder object. This is then used
to populate the various pass managers run.
* We now run both a FunctionPassManager and a module-wide PassManager. This is
what clang does, and I presume that we *may* see a speed boost from the
module-wide passes just having to do less work. I have not measured this.
* The codegen pass manager has been extracted to its own separate pass manager
to not get mixed up with the other passes
* All pass managers now include passes for target-specific data layout and
analysis passes
Some new features include:
* You can now print all passes being run with `-Z print-llvm-passes`
* When specifying passes via `--passes`, the passes are now appended to the
default list of passes instead of overwriting them.
* The output of `--passes list` is now generated by LLVM instead of maintaining
a list of passes ourselves
* Loop vectorization is turned on by default as an optimization pass and can be
disabled with `-Z no-vectorize-loops`
All of these "copies" of clang are based off their [source code](http://clang.llvm.org/doxygen/BackendUtil_8cpp_source.html) in case anyone is curious what my source is. I was hoping that this would fix #8665, but it does not help the performance issues found there. Hopefully it'll allow us to tweak passes or see what's going on to try to debug that problem.
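For the curious, here is the clang-style setup sketched in terms of the (legacy, since-removed) LLVM-C pass manager builder API. The FFI declarations are abbreviated and the opaque pointer types are stand-ins for the real LLVM handle types:

```rust
use std::os::raw::{c_uint, c_void};

type PassManagerRef = *mut c_void;
type PassManagerBuilderRef = *mut c_void;
type ModuleRef = *mut c_void;

#[allow(non_snake_case)]
extern "C" {
    fn LLVMPassManagerBuilderCreate() -> PassManagerBuilderRef;
    fn LLVMPassManagerBuilderSetOptLevel(pmb: PassManagerBuilderRef, level: c_uint);
    fn LLVMCreatePassManager() -> PassManagerRef;
    fn LLVMCreateFunctionPassManagerForModule(m: ModuleRef) -> PassManagerRef;
    fn LLVMPassManagerBuilderPopulateFunctionPassManager(
        pmb: PassManagerBuilderRef, pm: PassManagerRef);
    fn LLVMPassManagerBuilderPopulateModulePassManager(
        pmb: PassManagerBuilderRef, pm: PassManagerRef);
}

unsafe fn build_pass_managers(
    module: ModuleRef,
    opt_level: u32,
) -> (PassManagerRef, PassManagerRef) {
    // One builder decides the pass list; we no longer add passes by hand.
    let builder = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(builder, opt_level);

    // Per-function passes, run as each function is translated...
    let fpm = LLVMCreateFunctionPassManagerForModule(module);
    LLVMPassManagerBuilderPopulateFunctionPassManager(builder, fpm);

    // ...plus a module-wide pass manager, as clang does.
    let mpm = LLVMCreatePassManager();
    LLVMPassManagerBuilderPopulateModulePassManager(builder, mpm);

    (fpm, mpm)
}
```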
Some extern blocks are duplicated without the "stdcall" abi,
since Win64 has only one calling convention.
(Giving any abi to them causes LLVM to produce wrong bitcode.)
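A hypothetical illustration of the duplication (`SetLastError` is a real Win32 function, used here only as an example):

```rust
// The same function is declared with "stdcall" on 32-bit Windows and
// with the default ABI on Win64, selected via cfg.
#[cfg(all(windows, target_arch = "x86"))]
#[allow(non_snake_case)]
extern "stdcall" {
    fn SetLastError(code: u32);
}

#[cfg(all(windows, target_arch = "x86_64"))]
#[allow(non_snake_case)]
extern "C" {
    fn SetLastError(code: u32);
}
```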
.with_c_str() is a replacement for the old .as_c_str(), to avoid
unnecessary boilerplate.
Replace all usages of .to_c_str().with_ref() with .with_c_str().
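A modern-Rust analogue of the pattern (the helper here is illustrative; today's std spells this differently via `CString`): the closure receives a NUL-terminated pointer whose allocation lives exactly as long as the call.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Scope the C string's lifetime to the closure call, instead of the
// two-step .to_c_str().with_ref(..) dance.
fn with_c_str<R>(s: &str, f: impl FnOnce(*const c_char) -> R) -> R {
    let c = CString::new(s).expect("string contained an interior null");
    f(c.as_ptr())
}

// Usage (libc assumed as a dependency):
// let len = with_c_str("hello", |p| unsafe { libc::strlen(p) });
```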
This includes a number of improvements to `ifmt!`
* Implements formatting arguments -- `{:0.5x}` works now
* Formatting now works on all integer widths, not just `int` and `uint`
* Added a large doc block to `std::fmt` which should help explain what `ifmt!` is all about
* Added floating point formatters, although they have the same pitfalls from before (they're just proof-of-concept now)
Closed a couple of issues along the way, yay! Once this gets into a snapshot, I'll start looking into removing all of `fmt`.
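For context, these pieces survive in today's `format!` syntax; a few modern equivalents (not the 2013 `ifmt!` spelling):

```rust
fn main() {
    println!("{:08.3}", 3.14159_f64); // zero-pad to width 8, 3 decimals
    println!("{:#x}", 255u8);         // hex formatting on any int width
    println!("{:>10}", "hi");         // right-align within width 10
}
```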
what amount a T* pointer must be adjusted to reach the contents
of the box. For `~T` types, this requires knowing the type `T`,
which is not known in the case of objects.
This PR fixes #7235 and #3371 by removing trailing nulls from `str` types. Instead, it replaces the creation of c strings with a new type, `std::c_str::CString`, which wraps a malloced byte array, and respects:
* No interior nulls
* Ends with a trailing null
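A hedged sketch of those two invariants (an illustrative type, not the original `std::c_str` source; modern `std::ffi::CString` performs the same checks):

```rust
struct MyCString {
    buf: Vec<u8>, // owned buffer, including the trailing null
}

impl MyCString {
    fn new(bytes: &[u8]) -> Result<MyCString, &'static str> {
        // Invariant 1: no interior nulls.
        if bytes.contains(&0) {
            return Err("interior null byte");
        }
        // Invariant 2: always ends with a trailing null.
        let mut buf = bytes.to_vec();
        buf.push(0);
        Ok(MyCString { buf })
    }

    // Pointer suitable for passing to C; valid while `self` lives.
    fn as_ptr(&self) -> *const u8 {
        self.buf.as_ptr()
    }
}
```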
Mostly optimizing TLS accesses to bring local heap allocation performance
closer to that of oldsched. It's not completely at parity but removing the
branches involved in supporting oldsched and optimizing pthread_get/setspecific
to instead use our dedicated TCB slot will probably make up for it.