Make all hand-written `IntoDiagnostic` impls generic by using
`DiagnosticBuilder::new(dcx, level, ...)` instead of e.g.
`dcx.struct_err(...)`.
This means the `create_*` functions are the source of the error level.
This change will let us remove `struct_diagnostic`.
Note: `#[rustc_lint_diagnostics]` is added to `DiagnosticBuilder::new`;
this is necessary to pass the diagnostics tests now that it's used in
`into_diagnostic` functions.
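Roughly, the shape of the change to a hand-written impl looks like this (a sketch only: `UnknownTarget` is a hypothetical diagnostic struct, and the exact trait and type names are approximate; only `DiagnosticBuilder::new(dcx, level, ...)` is taken from the change itself):
```
// Before: the level is hard-coded by a level-specific constructor.
impl<'a> IntoDiagnostic<'a> for UnknownTarget {
    fn into_diagnostic(self, dcx: &'a DiagCtxt) -> DiagnosticBuilder<'a, ErrorGuaranteed> {
        dcx.struct_err("unknown target") // always an error
    }
}

// After: generic over the emission guarantee; the level is supplied by the
// calling `create_*` function.
impl<'a, G: EmissionGuarantee> IntoDiagnostic<'a, G> for UnknownTarget {
    fn into_diagnostic(self, dcx: &'a DiagCtxt, level: Level) -> DiagnosticBuilder<'a, G> {
        DiagnosticBuilder::new(dcx, level, "unknown target")
    }
}
```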
Add emulated TLS support
This is a reopen of https://github.com/rust-lang/rust/pull/96317. Many Android devices still only provide 128 pthread keys, so using emutls can be helpful.
Currently LLVM uses emutls by default for some targets (such as Android and OpenBSD), but Rust does not use it because `has_thread_local` is false.
This commit makes several changes to allow users to enable emutls:
1. Add a `-Zhas-thread-local` flag to specify that std uses `#[thread_local]` instead of pthread keys (see the sketch after this list).
2. When using emutls, decorate symbol names so that thread-local symbols are found correctly.
3. Change `-Zforce-emulated-tls` to `-Ztls-model=emulated` to explicitly specify whether to generate emutls.
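A minimal sketch of what item 1 enables, assuming a nightly toolchain (illustrative only, not part of the change): with `-Zhas-thread-local` the target is declared to support `#[thread_local]` statics directly, and with `-Ztls-model=emulated` their accesses are lowered through emutls.
```
#![feature(thread_local)]
use std::cell::Cell;

// A per-thread counter stored in a #[thread_local] static rather than
// behind a pthread key.
#[thread_local]
static COUNTER: Cell<u32> = Cell::new(0);

fn bump() -> u32 {
    COUNTER.set(COUNTER.get() + 1);
    COUNTER.get()
}
```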
r? `@Amanieu`
compile-time evaluation: detect writes through immutable pointers
This has two motivations:
- it unblocks https://github.com/rust-lang/rust/pull/116745 (and therefore takes a big step towards `const_mut_refs` stabilization), because we can now detect if the memory that we find in `const` can be interned as "immutable"
- it would detect the UB that was uncovered in https://github.com/rust-lang/rust/pull/117905, which was caused by accidental stabilization of `copy` functions in `const` that can only be called with UB
When UB is detected, we emit a future-compat warn-by-default lint. This is not a breaking change, so it is completely in line with [the const-UB RFC](https://rust-lang.github.io/rfcs/3016-const-ub.html), meaning we don't need a t-lang FCP here. I made the lint immediately show up for dependencies since it is nearly impossible to even trigger this lint without `const_mut_refs`: the accidentally stabilized `copy` functions are the only way this can happen, so the crates that popped up in #117905 are the only causes of such UB (in the code that crater covers), and the three cases of UB that we know about have all been fixed in their respective crates already.
The way this is implemented is by making use of the fact that our interpreter is already generic over the notion of provenance. For CTFE we now use the new `CtfeProvenance` type which is conceptually an `AllocId` plus a boolean `immutable` flag (but packed for a more efficient representation). This means we can mark a pointer as immutable when it is created as a shared reference. The flag will be propagated to all pointers derived from this one. We can then check the immutable flag on each write to reject writes through immutable pointers.
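For illustration only (a hedged sketch, not a test from this PR), the pattern the check rejects looks like this; apart from the accidentally stabilized `copy` functions, it can essentially only be written with `const_mut_refs`:
```
// The pointer below is derived from a shared reference, so its CtfeProvenance
// carries the immutable flag; the write through it is const-eval UB and now
// triggers the future-compat lint.
const C: i32 = unsafe {
    let x = 0i32;
    let p = &x as *const i32 as *mut i32; // marked immutable at creation
    *p = 1; // write through an immutable pointer
    x
};
```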
I just hope perf works out.
If we're running against a patched libgccjit, use an algorithm similar
to what LLVM uses for this intrinsic. Otherwise, fall back to a
per-element bitreverse.
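For context, a hedged sketch in plain Rust (not the libgccjit builder calls) of the per-element mask-and-shift scheme such lowerings use:
```
// Classic bit reversal for one 32-bit element: swap adjacent bits, then bit
// pairs, then nibbles, and finish with a byte swap.
fn bitreverse_u32(mut x: u32) -> u32 {
    x = ((x & 0x5555_5555) << 1) | ((x >> 1) & 0x5555_5555);
    x = ((x & 0x3333_3333) << 2) | ((x >> 2) & 0x3333_3333);
    x = ((x & 0x0F0F_0F0F) << 4) | ((x >> 4) & 0x0F0F_0F0F);
    x.swap_bytes()
}
```
(For any `x: u32`, this agrees with `x.reverse_bits()`.)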
Signed-off-by: Andy Sadler <andrewsadler122@gmail.com>
The simd intrinsic handler was delegating the implementation of `simd_frem`
to `Builder::frem`, which wasn't able to handle vector-typed inputs. To
fix this, teach that method how to handle vector inputs.
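For context, a hedged sketch of the kind of portable-SIMD code that reaches `simd_frem` (the `%` operator on float vectors lowers to that intrinsic):
```
#![feature(portable_simd)]
use std::simd::f32x4;

// Element-wise remainder on a float vector; previously Builder::frem could
// not handle the vector-typed operands this produces.
fn rem_lanes(a: f32x4, b: f32x4) -> f32x4 {
    a % b
}
```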
Signed-off-by: Andy Sadler <andrewsadler122@gmail.com>
Implements lane-local byte swapping through vector shuffles. While this
requires more setup than non-vector byte swaps, this implementation can
swap multiple integers concurrently.
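A hedged sketch of the idea in plain Rust (not the backend's builder code): the shuffle mask reverses the bytes within each lane while keeping the lanes in place, so every lane is byteswapped by a single shuffle.
```
// Build a byte-shuffle mask that reverses bytes within each lane but keeps
// the lanes in order.
fn lane_bswap_mask(lanes: usize, bytes_per_lane: usize) -> Vec<u32> {
    (0..lanes)
        .flat_map(|lane| {
            (0..bytes_per_lane)
                .rev()
                .map(move |b| (lane * bytes_per_lane + b) as u32)
        })
        .collect()
}

// For 4 lanes of u32: [3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12]
```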
Signed-off-by: Andy Sadler <andrewsadler122@gmail.com>
Currently we always do this:
```
use rustc_fluent_macro::fluent_messages;
...
fluent_messages! { "./example.ftl" }
```
But there is no need; we can just do this everywhere:
```
rustc_fluent_macro::fluent_messages! { "./example.ftl" }
```
which is shorter.
The `fluent_messages!` macro produces uses of
`crate::{D,Subd}iagnosticMessage`, which means that every crate using
the macro must have this import:
```
use rustc_errors::{DiagnosticMessage, SubdiagnosticMessage};
```
This commit changes the macro to instead use
`rustc_errors::{D,Subd}iagnosticMessage`, which avoids the need for the
imports.
Don't fall back on breaking apart the popcount operation if 128-bit
integers are natively supported.
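A hedged sketch of the fallback shape this avoids: without native 128-bit support, the count is done on the two 64-bit halves and summed.
```
// Popcount a u128 by splitting it into two u64 halves; with native 128-bit
// support, a single count over the full value is emitted instead.
fn popcount_u128_split(x: u128) -> u32 {
    (x as u64).count_ones() + ((x >> 64) as u64).count_ones()
}
```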
Signed-off-by: Andy Sadler <andrewsadler122@gmail.com>
The gcc backend of rustc currently emits the following for a function that
implements popcount for a u32 (x86_64 targeting AVX2, using the standard
unix calling convention):
```
popcount:
    mov eax, edi
    and edi, 1431655765
    shr eax
    and eax, 1431655765
    add edi, eax
    mov edx, edi
    and edi, 858993459
    shr edx, 2
    and edx, 858993459
    add edx, edi
    mov eax, edx
    and edx, 252645135
    shr eax, 4
    and eax, 252645135
    add eax, edx
    mov edx, eax
    and eax, 16711935
    shr edx, 8
    and edx, 16711935
    add edx, eax
    movzx eax, dx
    shr edx, 16
    add eax, edx
    ret
```
Rather than using this implementation, gcc could be told to use Wenger's
algorithm. This would give the same function the following implementation:
```
popcount:
    xor eax, eax
    xor edx, edx
    popcnt eax, edi
    test edi, edi
    cmove eax, edx
    ret
```
This patch implements the popcount operation in terms of Wenger's algorithm in
all cases.
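For reference, the Rust source corresponding to the assembly above is essentially just (a sketch for context):
```
// A u32 popcount; count_ones() is the popcount operation whose lowering this
// patch changes.
pub fn popcount(x: u32) -> u32 {
    x.count_ones()
}
```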
Signed-off-by: Andy Sadler <andrewsadler122@gmail.com>
c6e6ecb1afea9695a42d0f148ce153536b279eb5 added it to some of the
compiler's crates, but avoided adding it to all of them to reduce
bit-rot. This commit adds it to more of them.