This removes all of the explicit `@mut` fields from `CrateContext`. There are still a few that are managed, but no longer do we have `@mut bool` in the structure.
Most of the changes are changing `@CrateContext` to `@mut CrateContext`, though I changed as many as I could get away with to `&CrateContext` and `&mut CrateContext`. The biggest thing preventing me from using `&[mut]` in most places was the instruction counter. In two cases, one hitting a static borrow error and one a dynamic borrow error, I opted to remove the count call, as it was literally the only thing preventing me from switching those functions to `&mut CrateContext` parameters.
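Using `Rc<RefCell<_>>` as a rough modern stand-in for the old `@mut` managed pointer, and a made-up counter field, the shape of the change looks roughly like this:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A rough analogue of the change, with Rc<RefCell<_>> standing in for the old
// @mut managed pointer; the field shown is made up for illustration.
struct CrateContext {
    n_insns: usize,
}

// Before (schematically): callers pass around a shared, refcounted handle,
// paying refcount traffic and dynamic borrow checks.
fn trans_item_shared(ccx: &Rc<RefCell<CrateContext>>) {
    ccx.borrow_mut().n_insns += 1;
}

// After: a plain borrowed pointer, statically checked, with no extra
// indirection beyond the reference itself.
fn trans_item_borrowed(ccx: &mut CrateContext) {
    ccx.n_insns += 1;
}

fn main() {
    let shared = Rc::new(RefCell::new(CrateContext { n_insns: 0 }));
    trans_item_shared(&shared);

    let mut owned = CrateContext { n_insns: 0 };
    trans_item_borrowed(&mut owned);
}
```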
Other things to note:
* the EncoderContext uses borrowed pointers with lifetimes, since it can, though that required working around the limitation that a structure holding borrowed pointers can't be moved into a heap closure. I changed as much as I could to stack closures, but unfortunately I hit the AST visitor, and changing that is somewhat outside the scope of this PR. Instead (and there is a comment to this effect) I chose to unsafely get the structure into the heap, because I know the lifetimes involved are safe even though the compiler can't prove it.
* Many of the changes are workarounds for the borrow checker, which either dynamically freezes the context for too long or infers too large a scope. This mostly comes from nested function calls, where each borrow is considered to last for the entire statement, and from cases where `CrateContext` was borrowed in a `match`, causing it to stay borrowed for the entire length of the match even though that wasn't wanted (or needed); see the sketch after this list.
* I haven't yet tested whether this changes compilation times in any way. I doubt there will be much of an impact, however, as the only major improvements are less indirection and fewer refcount bumps.
* This lays the foundations to remove many more heap allocations in trans as many cases can be changed to use lifetimes instead.
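For the borrow-checker workarounds mentioned above, here is a minimal sketch of the pattern, with a made-up `Ctx` type standing in for `CrateContext` (the real code shapes differ):

```rust
// Under the borrow checker of the time, a borrow taken inside a nested call
// or a `match` head lasted for the whole statement or match, so the fix was
// to hoist the borrowed value into its own `let` first.
struct Ctx {
    names: Vec<String>,
    counter: usize,
}

fn lookup(ctx: &Ctx, i: usize) -> String {
    ctx.names[i].clone()
}

fn bump(ctx: &mut Ctx) {
    ctx.counter += 1;
}

fn use_ctx(ctx: &mut Ctx) {
    // Writing `match lookup(ctx, 0) { ... }` kept `ctx` immutably borrowed
    // for the entire match, so `bump(ctx)` inside an arm was rejected.
    // Hoisting the lookup out of the match head avoided the long borrow.
    let name = lookup(ctx, 0);
    match name.as_str() {
        "main" => bump(ctx),
        _ => {}
    }
}

fn main() {
    let mut ctx = Ctx { names: vec!["main".to_string()], counter: 0 };
    use_ctx(&mut ctx);
    assert_eq!(ctx.counter, 1);
}
```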
=====
This change includes some other minor refactorings, as I am planning a series; however, I don't want to submit them all at once, since it will be hell to continually rebase.
Remove all the explicit @mut-fields from CrateContext, though many
fields are still @-ptrs.
This required changing every single function call that explicitly
took a @CrateContext, so I took advantage and changed as many as I
could get away with to &-ptrs or &mut ptrs.
Currently, when calling glue functions, we cast the function to match
the argument type. This interacts very badly with LLVM and breaks
inlining of the glue code.
It's more efficient to use a unified function type for the glue
functions and always cast the function argument instead of the function.
The resulting code for rustc is about 13% faster (measured up to and
including the "trans" pass) and the resulting librustc is about 5%
smaller.
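As a minimal sketch of the idea, using plain Rust function pointers rather than rustc's actual trans/glue machinery (the type and function names here are made up): every glue function gets the same opaque signature, and the call site casts the argument rather than the function.

```rust
// Unified glue signature: always takes an opaque pointer.
type GlueFn = unsafe fn(*mut u8);

struct Foo {
    data: Vec<u32>,
}

// One concrete glue instance; it casts the opaque argument back to the
// concrete type internally instead of requiring a differently-typed function
// pointer per argument type.
unsafe fn drop_glue_foo(p: *mut u8) {
    unsafe { std::ptr::drop_in_place(p as *mut Foo) }
}

// The call site always uses the unified type, so the callee's declared type
// matches the call and the optimizer can see through (and inline) it.
unsafe fn call_glue(glue: GlueFn, p: *mut u8) {
    unsafe { glue(p) }
}

fn main() {
    let mut foo = std::mem::ManuallyDrop::new(Foo { data: vec![1, 2, 3] });
    unsafe {
        // Cast the argument, not the function.
        call_glue(drop_glue_foo, &mut *foo as *mut Foo as *mut u8);
    }
}
```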
r? @graydon

Automate more tests described in the commands.txt file,
and add infrastructure for running them. Right now, tests shell
out to call rustpkg. This is not ideal.
Goes part of the way towards addressing #5683
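For illustration only, a minimal sketch of what a "shell out to rustpkg" test looks like; the subcommand and package name are made up and this is not the actual test suite code.

```rust
use std::process::Command;

// Run rustpkg as a subprocess and check its exit status. This is the
// "shell out" style of test described above.
fn rustpkg_succeeds(args: &[&str]) -> bool {
    Command::new("rustpkg")
        .args(args)
        .status()
        .map(|status| status.success())
        .unwrap_or(false)
}

fn main() {
    // Hypothetical invocation; fails if rustpkg is not on the PATH.
    assert!(rustpkg_succeeds(&["install", "hello-world"]));
}
```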
This is caused by StrVector having a generic implementation for `&[S]`, so, because of #5898, method resolution of `~[~[1]].concat()` sees that
both StrVector and VectorVector have methods that (superficially) match.
They are now connect_vec and concat_vec, which means that they can actually be
called.
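A standalone sketch of the conflict, with stand-in traits rather than the real StrVector/VectorVector impls: if both traits exposed a method literally named `concat`, method resolution on a slice of vectors could consider both candidates, so the methods get distinct names.

```rust
trait ConcatStrs {
    fn concat_str(&self) -> String;
}

trait ConcatVecs<T> {
    fn concat_vec(&self) -> Vec<T>;
}

impl<S: AsRef<str>> ConcatStrs for [S] {
    fn concat_str(&self) -> String {
        self.iter().map(|s| s.as_ref()).collect()
    }
}

impl<T: Clone> ConcatVecs<T> for [Vec<T>] {
    fn concat_vec(&self) -> Vec<T> {
        self.iter().flat_map(|v| v.iter().cloned()).collect()
    }
}

fn main() {
    let nested: &[Vec<i32>] = &[vec![1], vec![2, 3]];
    // With both methods named plain `concat`, this call would have been
    // ambiguous; the distinct suffixes make the intent explicit.
    assert_eq!(nested.concat_vec(), vec![1, 2, 3]);

    let words: &[&str] = &["con", "cat"];
    assert_eq!(words.concat_str(), "concat");
}
```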
r? @brson
Links to issues: #7065 is the race that's fixed; #7066 is the perf improvement I added. There are also some minor cleanup commits here.
To measure the performance improvement from replacing the exclusive with an atomic uint, I edited the `msgsend-ring-rw-arcs` bench test to do a `write_downgrade` instead of just a `write`, so that it stressed the code paths that accessed `read_count`. (At first I was still using `write` and saw no performance difference whatsoever, whoooops.)
The bench test measures how long it takes to send 1,000,000 messages by using rwarcs to emulate pipes. I also measured the performance difference imposed by the fix to the `access_lock` race (which involves taking an extra semaphore in the `cond.wait()` path). The net result is that fixing the race imposes a 4% to 5% slowdown, but doing the atomic uint optimization gives a 6% to 8% speedup.
Note that this speedup will be most visible in read- or downgrade-heavy workloads. If an RWARC's only users are writers, the optimization doesn't matter. All the same, I think this more than justifies the extra complexity I mentioned in #7066.
The raw numbers are:
```
with xadd read count
    before write_cond fix:  4.18 to 4.26 us/message
    with write_cond fix:    4.35 to 4.39 us/message
with exclusive read count
    before write_cond fix:  4.41 to 4.47 us/message
    with write_cond fix:    4.65 to 4.76 us/message
```
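For reference, a minimal sketch of the optimization idea only: track the reader count with an atomic integer (one fetch-add, roughly an `xadd` on x86, per reader) instead of behind an exclusive. The names below are made up; this is not the real RWARC implementation.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct ReadCount {
    readers: AtomicUsize,
}

impl ReadCount {
    fn new() -> ReadCount {
        ReadCount { readers: AtomicUsize::new(0) }
    }

    // A reader enters: one atomic increment, no lock acquire/release.
    fn reader_enter(&self) {
        self.readers.fetch_add(1, Ordering::AcqRel);
    }

    // A reader leaves; returns true if it was the last one, so the caller
    // knows when to release the write-side lock / wake writers.
    fn reader_exit(&self) -> bool {
        self.readers.fetch_sub(1, Ordering::AcqRel) == 1
    }
}

fn main() {
    let rc = ReadCount::new();
    rc.reader_enter();
    rc.reader_enter();
    assert!(!rc.reader_exit());
    assert!(rc.reader_exit());
}
```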
Implement conditional support in terminfo, along with a few other related operators.
Fix implementation of non-commutative arithmetic operators.
Remove all known cases of task failure from `terminfo::parm::expand`, and change the method signature.
Fix some other miscellaneous issues.
The code compiles and runs under Windows now, but I couldn't look up any
symbol from the current executable (dlopen(NULL)), and calling external
function handles that were looked up doesn't seem to work correctly under Windows.
This un-reverts the reverts of the rusti commits made a while back. These were reverted due to an LLVM failure in rustpkg. I believe that this is not a problem with these commits, but rather that rustc is being used in parallel for rustpkg tests (in-process). This is not working yet (almost! see #7011), so I serialized all the tests to run one after another.
@brson, I'm mainly just guessing as to the cause of the LLVM failures in rustpkg tests. I'm confident that running tests in parallel is more likely to be the problem than those commits I made.
Additionally, this fixes two recently reported issues with rusti.
The lookups for these items in external crates currently cause repeated
decoding of the EBML metadata, which is pretty slow. Adding caches to
avoid the repeated decoding reduces the time required for the type
checking of librustc by about 25%.
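A minimal sketch of the caching idea, with hypothetical stand-in types (`ItemId`, `ItemInfo`, `decode_from_ebml`) rather than rustc's real metadata structures:

```rust
use std::collections::HashMap;

type ItemId = (u32, u32); // (crate number, node id)

#[derive(Clone)]
struct ItemInfo {
    name: String,
}

struct MetadataCache {
    decoded: HashMap<ItemId, ItemInfo>,
}

impl MetadataCache {
    fn new() -> MetadataCache {
        MetadataCache { decoded: HashMap::new() }
    }

    // Only the first lookup for a given id pays the decoding cost; later
    // lookups return the cached result.
    fn get_or_decode(&mut self, id: ItemId) -> ItemInfo {
        self.decoded
            .entry(id)
            .or_insert_with(|| decode_from_ebml(id))
            .clone()
    }
}

// Placeholder for the expensive EBML decoding path.
fn decode_from_ebml(id: ItemId) -> ItemInfo {
    ItemInfo { name: format!("item {}:{}", id.0, id.1) }
}

fn main() {
    let mut cache = MetadataCache::new();
    let a = cache.get_or_decode((1, 42)); // decodes
    let b = cache.get_or_decode((1, 42)); // served from the cache
    assert_eq!(a.name, b.name);
}
```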
Implement the %?, %t, %e, and %; operators. Also implement the %<, %=,
%> operators, without which conditionals aren't very useful.
Fix the order of parameters for the arithmetic operators.
Implement the missing %^ operator.
Replace all potentially-failing operations with Err returns and add
tests.
Remove the Char parameter type; characters are represented as Numbers.
Fix integer constants to work properly when there are multiple constants
in the same capability string.
Tweak loop to use iterators instead of indexing into cap.
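As a rough illustration of how the stack-based operators fit together, here is a heavily simplified expander that supports only `%{N}`, `%d`, the comparisons, and a single non-nested `%?...%t...%e...%;` conditional, and reports failures as `Err` the same way the changed signature does. This is a sketch, not the real `terminfo::parm::expand`.

```rust
fn expand_simplified(cap: &str) -> Result<String, String> {
    let mut out = String::new();
    let mut stack: Vec<isize> = Vec::new();
    let mut chars = cap.chars().peekable();
    let mut emitting = true; // false while skipping the untaken branch

    while let Some(c) = chars.next() {
        if c != '%' {
            if emitting {
                out.push(c);
            }
            continue;
        }
        let op = chars.next().ok_or("trailing %")?;
        match op {
            '{' => {
                // %{N}: push an integer constant
                let mut digits = String::new();
                while let Some(&d) = chars.peek() {
                    chars.next();
                    if d == '}' {
                        break;
                    }
                    digits.push(d);
                }
                if emitting {
                    stack.push(digits.parse().map_err(|_| "bad constant")?);
                }
            }
            '<' | '=' | '>' => {
                if emitting {
                    let b = stack.pop().ok_or("stack underflow")?;
                    let a = stack.pop().ok_or("stack underflow")?;
                    let r = match op { '<' => a < b, '=' => a == b, _ => a > b };
                    stack.push(r as isize);
                }
            }
            'd' => {
                if emitting {
                    out.push_str(&stack.pop().ok_or("stack underflow")?.to_string());
                }
            }
            '?' => {} // begin conditional; the condition is evaluated normally
            't' => {
                // then: pop the condition and decide whether to emit
                if emitting {
                    emitting = stack.pop().ok_or("stack underflow")? != 0;
                }
            }
            'e' => emitting = !emitting, // else: flip which branch is emitted
            ';' => emitting = true,      // end of conditional
            other => return Err(format!("unsupported %{}", other)),
        }
    }
    Ok(out)
}

fn main() {
    // Pushes 1 and 2, tests 1 < 2, and takes the then-branch.
    assert_eq!(expand_simplified("%?%{1}%{2}%<%tLT%eGE%;").unwrap(), "LT");
    assert_eq!(expand_simplified("%?%{2}%{2}%>%tGT%eLE%;").unwrap(), "LE");
    // A failing operation is reported as Err rather than a task failure.
    assert!(expand_simplified("%d").is_err());
}
```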