Add x86_64-unknown-linux-ohos target
This complements the existing `aarch64-unknown-linux-ohos` and `armv7-unknown-linux-ohos` targets.
This should be covered by the existing MCP (https://github.com/rust-lang/compiler-team/issues/568), but I can also create a new MCP if that is preferred.
Add support for allocators in `Rc` & `Arc`
Adds the ability for `std::rc::Rc`, `std::rc::Weak`, `std::sync::Arc`, and `std::sync::Weak` to have their backing allocations live in custom allocators.
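A minimal sketch of what this enables (nightly-only; assumes the unstable `allocator_api` feature, under which the new `new_in` constructors are gated):
```rust
#![feature(allocator_api)]

use std::alloc::System;
use std::rc::Rc;

fn main() {
    // Allocate the `Rc`'s backing storage with the `System` allocator
    // instead of the global allocator; the allocator becomes part of
    // the type, `Rc<i32, System>`.
    let value: Rc<i32, System> = Rc::new_in(42, System);
    println!("{value}");
}
```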
miri: fail when calling a function that requires an unavailable target feature
Miri will now report undefined behaviour when a function that has a `#[target_feature(enable = ...)]` attribute is called and the required feature is not available.
"Available features" are the same ones that `is_x86_feature_detected!` (or equivalent) reports as available during Miri execution (and they can be enabled or disabled with the `-C target-feature` flag).
CI: build CMake 3.20 to support LLVM 17
LLVM 17 will require at least CMake 3.20, so we have to go back to building our own CMake on the Linux x64 dist builder.
r? `@nikic`
Support reading uncompressed proc macro metadata
rust-lang/rust#113695 makes the dylib metadata uncompressed for perf reasons. This commit allows reading both the current compressed and future uncompressed dylib metadata.
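A hypothetical sketch, not the actual implementation, of how a reader could accept both forms; it assumes the `snap` crate and the snappy frame encoding that rustc has used for compressed metadata:
```rust
use std::io::Read;

// Hypothetical helper: try to decompress, and fall back to treating the
// bytes as already-uncompressed metadata if that fails.
fn read_dylib_metadata(raw: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    match snap::read::FrameDecoder::new(raw).read_to_end(&mut out) {
        Ok(_) => out,
        // Not snappy-framed data: assume it is uncompressed metadata.
        Err(_) => raw.to_vec(),
    }
}
```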
Generate safe stable code for derives on empty enums
Generate `match *self {}` instead of `unsafe { core::intrinsics::unreachable() }`.
This is:
1. safe
2. stable
for the benefit of everyone looking at these derived impls through `cargo expand`.
[Both expansions compile to the same code at all optimization levels (including `0`).](https://rust.godbolt.org/z/P79joGMh3)
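For illustration, roughly the shape the derive now produces for an empty enum (hand-written here, not the literal generated code):
```rust
enum Void {}

// What `#[derive(Clone)]` on `Void` now expands to, in spirit:
impl Clone for Void {
    fn clone(&self) -> Self {
        // An empty match on `*self` is exhaustive, because `Void` has no
        // variants, so no `unsafe` is needed.
        match *self {}
    }
}

fn main() {}
```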
Add a sparc-unknown-none-elf target.
# `sparc-unknown-none-elf`
**Tier: 3**
Rust for bare-metal 32-bit SPARC V7 and V8 systems, e.g. the Gaisler LEON3.
## Target maintainers
- Jonathan Pallant, `jonathan.pallant@ferrous-systems.com`, https://ferrous-systems.com
## Requirements
> Does the target support host tools, or only cross-compilation?
Only cross-compilation.
> Does the target support std, or alloc (either with a default allocator, or if the user supplies an allocator)?
Only tested with `libcore` but I see no reason why you couldn't also support `liballoc`.
> Document the expectations of binaries built for the target. Do they assume
specific minimum features beyond the baseline of the CPU/environment/etc? What
version of the OS or environment do they expect?
Tested by linking with a standard SPARC bare-metal toolchain; specifically, I used the [BCC2] toolchain from Gaisler (both the GCC and Clang variants, both as pre-compiled x64 Linux binaries and as a SPARC GCC I compiled from source to run on `aarch64-apple-darwin`).
The target is set to use the lowest-common-denominator `SPARC V7` architecture (yes, they started at V7 - see [Wikipedia](https://en.wikipedia.org/wiki/SPARC#History)).
[BCC2]: https://www.gaisler.com/index.php/downloads/compilers
> Are there notable `#[target_feature(...)]` or `-C target-feature=` values that
programs may wish to use?
`-Ctarget-cpu=v8` adds the instructions added in V8.
`-Ctarget-cpu=leon3` adds the V8 instructions and sets up scheduling to suit the Gaisler LEON3.
> What calling convention does `extern "C"` use on the target?
I believe this is defined by the SPARC architecture reference manuals, and that V7, V8, and V9 are all compatible.
> What format do binaries use by default? ELF, PE, something else?
ELF
## Building the target
> If Rust doesn't build the target by default, how can users build it? Can users
just add it to the `target` list in `config.toml`?
Yes. I did:
```toml
target = ["aarch64-apple-darwin", "sparc-unknown-none-elf"]
```
## Building Rust programs
> Rust does not yet ship pre-compiled artifacts for this target. To compile for
this target, you will either need to build Rust with the target enabled (see
"Building the target" above), or build your own copy of `core` by using
`build-std` or similar.
Correct.
## Testing
> Does the target support running binaries, or do binaries have varying
expectations that prevent having a standard way to run them?
No - it's a bare metal platform.
> If users can run binaries, can they do so in some common emulator, or do they need native
hardware?
But if you use [BCC2] as the linker, you get a default memory map and a default BSP suitable for the LEON3, so you can run the binaries in the `tsim-leon3` simulator from Gaisler.
```console
$ cat .cargo/config.toml | grep runner
runner = "tsim-leon3 -c sim-commands.txt"
$ cat sim-commands.txt
run
quit
$ cargo +sparcrust run --target=sparc-unknown-none-elf
Compiling sparc-demo-rust v0.1.0 (/work/sparc-demo-rust)
Finished dev [unoptimized + debuginfo] target(s) in 3.44s
Running `tsim-leon3 -c sim-commands.txt target/sparc-unknown-none-elf/debug/sparc-demo-rust`
TSIM3 LEON3 SPARC simulator, version 3.1.9 (evaluation version)
Copyright (C) 2023, Frontgrade Gaisler - all rights reserved.
This software may only be used with a valid license.
For latest updates, go to https://www.gaisler.com/
Comments or bug-reports to support@gaisler.com
This TSIM evaluation version will expire 2023-11-28
Number of CPUs: 2
system frequency: 50.000 MHz
icache: 1 * 4 KiB, 16 bytes/line (4 KiB total)
dcache: 1 * 4 KiB, 16 bytes/line (4 KiB total)
Allocated 8192 KiB SRAM memory, in 1 bank at 0x40000000
Allocated 32 MiB SDRAM memory, in 1 bank at 0x60000000
Allocated 8192 KiB ROM memory at 0x00000000
section: .text, addr: 0x40000000, size: 104400 bytes
section: .rodata, addr: 0x400197d0, size: 15616 bytes
section: .data, addr: 0x4001d4d0, size: 1176 bytes
read 1006 symbols
Initializing and starting from 0x40000000
Hello, this is Rust!
PANIC: PanicInfo { payload: Any { .. }, message: Some(I am a panic), location: Location { file: "src/main.rs", line: 33, col: 5 }, can_unwind: true }
Program exited normally on CPU 0.
```
> Does the target support running the Rust testsuite?
I don't think so; the testsuite requires `libstd`, IIRC.
## Cross-compilation toolchains and C code
> Does the target support C code?
Yes.
> If so, what toolchain target should users use to build compatible C code? (This may match the target triple, or it may be a toolchain for a different target triple, potentially with specific options or caveats.)
I suggest [BCC2] from Gaisler. It comes in both GCC and Clang variants.
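As a rough, hypothetical illustration (the `bsp_init` symbol is made up), Rust code on this target can call C built with [BCC2] through the usual `extern "C"` mechanism:
```rust
#![no_std]
#![no_main]

use core::panic::PanicInfo;

extern "C" {
    // Provided by C code compiled with the BCC2 toolchain (hypothetical).
    fn bsp_init() -> i32;
}

#[no_mangle]
pub extern "C" fn main() -> i32 {
    // SAFETY: assumes `bsp_init` is a plain C function with no preconditions.
    unsafe { bsp_init() }
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```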
Use u64 for incr comp allocation offsets
Fixes https://github.com/rust-lang/rust/issues/76037
Fixes https://github.com/rust-lang/rust/issues/95780
Fixes https://github.com/rust-lang/rust/issues/111613
These issues are all reporting ICEs caused by using `u32` to store offsets to allocations in the incremental compilation cache. This PR aims to lift that limitation by changing the offset type in question to `u64`.
There are two perf runs in this PR. The first reports a regression, and the second does not. The changes are the same in both. I rebased the PR then did the second perf run because I noticed that the primary regression in it was very commonly seen in spurious regression reports.
I do not know what the perf run will report when this is merged. I would not be surprised to see either a regression or a neutral result, but the cachegrind diffs for the regression point at `try_mark_previous_green`, which is a common source of inexplicable regressions and which I don't think should be perturbed by this PR.
I'm not opposed to adding a regression test such as
```rust
fn main() {
    println!("{}", [37; 1 << 30].len());
}
```
But that program takes 1 minute to compile, consumes 4.6 GB of memory, and then writes that much to disk. Is that a concerning amount of resource use for a test?
r? `@nnethercote`
Streamline size estimates (take 2)
This was merged in #113684 but then [something happened](https://github.com/rust-lang/rust/pull/113684#issuecomment-1636811985):
> There has been a bors issue that lead to the merge commit of this PR getting purged from master.
> You'll have to make a new PR to reapply it.
So these are exactly the same changes.
`@bors` r=wesleywiser
Add support for inherent projections in new solver
These are not hard to support, and doing so cuts out a really big chunk of failing UI tests with `--compare-mode=next-solver`.
r? `@lcnr` (feel free to reassign, anyone can review this)
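For context, a minimal example of the kind of inherent projection involved (an inherent associated type, behind the unstable `inherent_associated_types` feature):
```rust
#![feature(inherent_associated_types)]
#![allow(incomplete_features)]

struct Foo;

impl Foo {
    // An inherent associated type; `Foo::Assoc` below is an
    // "inherent projection" that the solver must normalize.
    type Assoc = u32;
}

fn takes(x: Foo::Assoc) -> u32 {
    x
}

fn main() {
    assert_eq!(takes(7), 7);
}
```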
They're quite rare, and ignoring them simplifies things quite a bit, and
further reduces the number of calls to `MonoItem::size_estimate` to the
number of placed items (one per root item, and one or more per reachable
inlined item).
This means we call `MonoItem::size_estimate` (which involves a query)
less often: just once per mono item, and then once more per inline item
placement. After that we can reuse the stored value as necessary. This
means `CodegenUnit::compute_size_estimate` is cheaper.
Add Platform Support documentation for MIPS Release 6 targets
This is a follow-up to our to-announce MCP, rust-lang/compiler-team#638, where we proposed to assign several maintainers for MIPS R6 targets and were told to explain that this set of targets is experimental in nature.
This documentation describes Rust support for `mipsisa*r6*-unknown-linux-gnu*` targets (mainly `mipsisa64r6el-unknown-linux-gnuabi64`), including toolchain setup, building, and testing procedures.
Don't call `predicate_must_hold`-esque functions during fulfillment in intercrate
Fixes #113415
Given that this only happens in `translate_substs`, I don't actually think that this is something that you can weaponize, but it's still sketchy regardless.
r? `@lcnr`
don't hide lifetimes for `LateContext`
Running `cargo dev new_lint --type methods` creates the lint file with hidden lifetimes for the `LateContext` parameter (i.e. `&LateContext`, when it should be `&LateContext<'_>`). This is already warned on with `#![warn(rust_2018_idioms)]`, so clippy should not use hidden lifetimes.
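A self-contained analogue of the warning (using a stand-in `Ctx<'a>` type, since `LateContext` lives inside rustc and isn't available outside it):
```rust
#![warn(rust_2018_idioms)]

struct Ctx<'a> {
    name: &'a str,
}

// Warned on by `rust_2018_idioms`: the lifetime parameter is hidden.
// fn check(cx: &Ctx) -> &str { cx.name }

// What generated lint skeletons should use instead: `&Ctx<'_>`.
fn check(cx: &Ctx<'_>) -> &str {
    cx.name
}

fn main() {
    let cx = Ctx { name: "late context" };
    println!("{}", check(&cx));
}
```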
changelog: none
Check entry type as part of item type checking.
This code is currently executed inside the root `analysis` query.
Instead, check it during `check_for_entry_fn(CRATE_DEF_ID)` to hopefully avoid some re-executions.
`CRATE_DEF_ID` is chosen because the entry fn is typically at the crate root, so the corresponding HIR should already be among the dependencies.