GitHub Action to periodically run `cargo update` to keep dependencies current
Opens a PR periodically with the results of `cargo update`. If an unmerged PR for the `cargo_update` branch already exists, it edits that PR, reopening it if necessary.
~~This also uses [`cargo-upgrades`](https://gitlab.com/kornelski/cargo-upgrades) to provide a list of available major upgrades in the PR body.~~
It includes the list of changes output by `cargo update` in the commit message and PR body. Note that this output is currently sub-optimal due to https://github.com/rust-lang/cargo/issues/9408, but if updates are made more regularly that issue is less likely to show up.
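For reference, a minimal sketch of the shape of such a workflow, with illustrative names and steps rather than the exact file in this PR:

```yaml
# Sketch only: the cron period, branch handling, and PR-creation step are illustrative.
name: cargo-update
on:
  schedule:
    - cron: '0 0 * * 1' # weekly, matching the period discussed below
  workflow_dispatch: {}  # allow manual runs while testing
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run cargo update and capture its change list
        run: cargo update 2>&1 | tee cargo_update.log
      - name: Commit and push to the cargo_update branch
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@github.com"
          git checkout -B cargo_update
          git add Cargo.lock
          git commit -m "cargo update" -m "$(cat cargo_update.log)"
          git push --force origin cargo_update
      # Opening (or editing) the PR itself can then be done with the
      # GitHub CLI (`gh pr create` / `gh pr edit`) or a dedicated action.
```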
Example PR: https://github.com/pitaj/rust/pull/2
Example action run: https://github.com/pitaj/rust/actions/runs/5035731903
Prior discussion: https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/dependabot.20updates.3F
Up for discussion:
- What period do we want? Currently weekly
- What user should it use? Currently "GitHub Actions"
- Do we need the extra security provided by executing `cargo update` and `cargo-upgrades` in a separate job?
If not, I can simplify it to not need artifacts.
- PR message wording
- PR should probably always be `rollup=always`?
- What branch should it use?
- What should it do if no updates are available? Currently it fails the job on an empty commit
- Should the yml file live in `src/ci` instead of directly under workflows?
- ~~Is using the latest nightly toolchain enough to ensure compatibility with `Cargo.lock` and `Cargo.toml`s in master?~~
Now pulls the bootstrap version from stage0.json
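A sketch of what that can look like as a workflow step, assuming the `compiler.version`/`compiler.date` layout of `src/stage0.json`:

```yaml
- name: Install the pinned bootstrap toolchain
  run: |
    # Read the bootstrap compiler pin from stage0.json rather than using
    # the latest nightly, so the toolchain matches what Cargo.lock and
    # the Cargo.tomls on master expect. Field names assume the current
    # stage0.json layout.
    VERSION=$(jq -r '.compiler.version' src/stage0.json)
    DATE=$(jq -r '.compiler.date' src/stage0.json)
    rustup toolchain install "$VERSION-$DATE"
    rustup default "$VERSION-$DATE"
```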
r? infra
- Keep Cargo.lock dependencies current
- Present output from `cargo update` in the commit message and PR body
- Edit existing open PR, otherwise open a new one
- Skip if existing open PR is S-waiting-on-bors
Remove AWS CLI install
All runner images have the AWS CLI 2 installed, so there isn't a particularly strong reason to keep installing our own version.
The version we were installing was 1.27.122. The runner images currently ship 2.11.x (the exact version varies by image).
I do not have the means to really test if the new version has any issues. I looked at all the `aws` commands, and none of them seem to be doing anything unusual. The page at https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-changes.html contains a list of all the breaking changes, and I didn't see anything that looked important.
Optimize builder sizes
The infra-team is continuously monitoring the efficiency of the CI system in an effort to improve overall build times and resource usage. Some builders have used much less than their allocated resources, so we are testing smaller builder sizes for them.
r? `@pietroalbini`
The infra-team is continuously monitoring the efficiency of the build
system in an effort to improve overall build times and resource usage.
The builders for some of the `x86_64-gnu` targets have used far fewer
resources than allocated in the past, so we are testing a smaller
builder size for them.
The infra-team is continuously monitoring the efficiency of the build
system in an effort to improve overall build times and resource usage.
The builder for the `mingw-check` target has used far fewer resources
than allocated in the past, so we are testing a smaller builder size for
it.
The infra-team is continuously monitoring the efficiency of the build
system in an effort to improve overall build times and resource usage.
The builders for the `i686-gnu` targets have used far fewer resources
than allocated in the past, so we are testing a smaller builder size for
them.
Spelling misc
These two files seem to be fairly distinct from everything else.
That said, if this project doesn't like changing changelogs, I'm happy to drop the changes to `RELEASES.md`.
Like #107044, this will let us track compatibility with LLVM 16 going
forward, especially after we eventually upgrade our own LLVM to the next version.
This also drops `tidy` here and in `x86_64-gnu-llvm-15`, syncing with
that change in #106085.
These builders aren't particularly high on overall average CPU usage and typically finish in
around 30 minutes. Cutting their core counts will hopefully not significantly increase wall-time
while cutting costs, allowing us to shift some of the wins into our slower builders.
For now this keeps all the configuration identical (AFAICT) but we'll
likely want to play with the specifics to move some of the slower
builders to larger machines and the faster builders to smaller machines,
likely reducing overall usage and improving CI times.
Previously, it would only run on changes to subtrees, submodules, or select directories.
That made it so that changes to the compiler that broke tools would only be detected on a full bors merge.
This makes the tools builder run by default, making it easier to catch breaking changes to clippy (which was the most affected).
Enable Cargo's sparse protocol in CI
This enables the sparse protocol in CI in order to exercise and dogfood it. This is intended to test the production server in a real-world situation.
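With the stabilized opt-in, enabling it for a job amounts to one environment variable; a sketch (exactly where it is set in our workflows may differ):

```yaml
env:
  # Use the sparse HTTP index for crates.io instead of cloning the
  # full git index on every job.
  CARGO_REGISTRIES_CRATES_IO_PROTOCOL: sparse
```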
Closes #107342
Include both md and yaml ICE ticket templates
* Existing compilers link to the md version
* The YAML version field for the backtrace *doesn't let us paste a full backtrace*
* We will need the YAML version in order to be able to submit reports once we start storing the backtrace to disk
Follow up to #106831. Reaction to #106874, which made me realize that *really* long backtraces are rejected by GitHub Forms. A single backtrace won't hit this, but ICEs sometimes compound.
Port pgo.sh to Python
This PR ports the `pgo.sh` multi-stage build script from Bash to Python, to make it easier to add new functionality and gather statistics. Main changes:
1) `pgo.sh` rewritten from Bash to Python. Jump from ~200 Bash LOC to ~650 Python LOC. Bash is, unsurprisingly, more concise for running scripts and binaries.
2) Better logging. Each separate stage is now clearly separated in logs, and the logs can be quickly grepped to find out which stage has completed or failed, and how long it took.
3) Better statistics. At the end of the run, there is now a table that shows the duration of each individual stage, along with its percentage of the total workflow run:
```
2023-01-15T18:13:49.9896916Z stage-build INFO: Timer results
2023-01-15T18:13:49.9902185Z ---------------------------------------------------------
2023-01-15T18:13:49.9902605Z Build rustc (LLVM PGO): 1815.67s (21.47%)
2023-01-15T18:13:49.9902949Z Gather profiles (LLVM PGO): 418.73s ( 4.95%)
2023-01-15T18:13:49.9903269Z Build rustc (rustc PGO): 584.46s ( 6.91%)
2023-01-15T18:13:49.9903835Z Gather profiles (rustc PGO): 806.32s ( 9.53%)
2023-01-15T18:13:49.9904154Z Build rustc (LLVM BOLT): 1662.92s (19.66%)
2023-01-15T18:13:49.9904464Z Gather profiles (LLVM BOLT): 715.18s ( 8.46%)
2023-01-15T18:13:49.9914463Z Final build: 2454.00s (29.02%)
2023-01-15T18:13:49.9914798Z Total duration: 8457.27s
2023-01-15T18:13:49.9915305Z ---------------------------------------------------------
```
A sample run can be seen [here](https://github.com/rust-lang/rust/actions/runs/3923980164/jobs/6707932029).
I tried to keep the code compatible with Python 3.6 and to avoid dependencies, which required me to reimplement some small pieces of functionality (like formatting bytes). I suppose that it shouldn't be so hard to upgrade to a newer Python or install dependencies in the CI container, but I'd like to avoid that if it isn't needed.
The code is in a single file `stage-build.py`, so it's a bit cluttered. I can also separate it into multiple files, although having it in a single file has some benefits. The code could definitely be nicer, but I'm a bit wary of introducing a lot of abstraction and similar stuff, as long as the code is stuffed into a single file.
Currently, the Python pipeline should faithfully mirror the bash pipeline one by one. After this PR, I'd like to try to optimize it, e.g. by caching the LLVM builds on S3.
r? `@Mark-Simulacrum`
This duplicates mingw-check into two jobs where one job
runs `tidy` only while the other job does not. The tidy
job will not cancel other jobs on failure.
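In generic GitHub Actions terms, the split has roughly this shape (job names and commands are illustrative; independent jobs, unlike entries in a fail-fast matrix, do not cancel each other):

```yaml
jobs:
  mingw-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./x.py check        # everything except tidy
  mingw-check-tidy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # A tidy failure fails this job but does not cancel mingw-check.
      - run: ./x.py test tidy
```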
Enable ThinLTO for rustc on `x86_64-apple-darwin`
Local measurements seemed to show an improvement on a couple of benchmarks, so I'd like to test real CI builds and see if the builder doesn't time out with the expected slight increase in build times.
Let's start with x64 rustc ThinLTO, and then figure out the file structure to configure LLVM ThinLTO. Maybe we'll then try `aarch64` builds since that also looked good locally.
Enable ThinLTO for rustc on x64 msvc
This applies the great work from `@bjorn3` and `@Kobzol` in https://github.com/rust-lang/rust/pull/101403 to x64 msvc.
Here are the local results for the try build `68c5c85ed759334a11f0b0e586f5032a23f85ce4`, compared to its parent `0a6b941df354c59b546ec4c0d27f2b9b0cb1162c`. Looking better than my previous local builds.
![image](https://user-images.githubusercontent.com/247183/198158039-98ebac0e-da0e-462e-8162-95e88345edb9.png)
(I can't show cycle counts, as that option is failing on the windows version of the perf collector, but I'll try to analyze and debug this soon)
This will be the first of a few tests for rustc / llvm / both ThinLTO on the windows and mac targets.
Distribute bootstrap in CI
This pre-compiles bootstrap from source and adds it to the existing `rust-dev` component. There are two main goals here:
1. Make it faster to build rust from source, both the first time and incrementally
2. Make it easier to add non-python entrypoints, since they can call out to bootstrap directly rather than having to figure out the right flags to pre-compile it. This second part is still in a bit of flux, see the tracking issue below for more information.
There are also several changes to make bootstrap able to run on a machine other than the one it was built on (particularly around `config.src` and `config.out` detection). I (`@jyn514`) am slightly concerned these will regress unless tested - maybe we should add an automated test that runs bootstrap in a chroot or something? Unclear whether the effort is worth the test coverage.
Helps with https://github.com/rust-lang/rust/issues/94829.
- Add a new `bootstrap` component
Originally, we planned to combine this with the `rust-dev` component.
However, I realized that would force LLVM to be redownloaded whenever bootstrap is modified.
LLVM is a much larger download, so split this to get better caching.
- Build bootstrap for all tier 1 and 2 targets
See comment added for details on the test builder restriction. This is primarily
intended for macOS CI, but is likely to be a slight win on other builders too.
update: actions/checkout@v2 to actions/checkout@v3 for all yaml files
Revert "update: actions/checkout@v2 to actions/checkout@v3 for all yaml files"
This reverts commit 7445e582b900f0f56f5f2bd9036aacab97ef28e9.
change GitHub Actions version v2 to v3
For some reason, `tar` behaves differently in such a way that it does
not create symlinks on Windows correctly, resulting in
`Cannot create symlink to 'ld.gold': No such file or directory`
errors.
This builder is the slowest in the fleet. This should cut a considerable
amount of time. The manifest should now include the docs from
x86_64-apple-darwin. Although those docs are slightly different, it
should be close enough. When aarch64-apple-darwin heads towards tier 1,
we can revisit whether or not to re-enable the docs.
Before, you could have the confusing situation where the command to
generate a component had no relation to the name of that component (e.g.
the `rustc` component was generated with `src/librustc`). This changes
the name to make them match up.
Skip documentation for tier 2 targets on dist-x86_64-apple-darwin
I don't have an easy way to test this locally, but I believe it should work. Based on one log, this should shave ~14 minutes off the dist-x86_64-apple builder (it doesn't help with the aarch64 dist or x86_64 test builders, so it's most likely not decreasing total CI time).
r? ```@pietroalbini```
CI: Enable overflow checks for test (non-dist) builds
They stay disabled for Apple builds, though, which already take the most time due to running on slow hardware.
Build aarch64-apple-ios-sim as part of the full macOS build
Part of the [MCP 428](https://github.com/rust-lang/compiler-team/issues/428) to promote this target to Tier 2.
This adds the aarch64-apple-ios-sim target as a tier 2 target, currently cross-compiled from our x86_64 apple builders. The compiler team has approved the addition per the MCP noted above, and the infrastructure team has not raised concerns with this addition at this time (as the CI time impact is expected to be minimal; this is only building std).
During the 1.52 release process we had to deal with some commits that
passed the test suite on the nightly branch but failed on the beta or
stable branch. In that case it was due to some UI tests including the
channel name in the output, but other changes might also be dependent on
the channel.
This commit adds a new CI job that runs the Linux x86_64 test suite with
the stable branch, ensuring nightly changes also work as stable.
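A sketch of the shape of such a job; the channel-override variable is illustrative of the mechanism rather than the exact name used by our CI scripts:

```yaml
x86_64-gnu-stable:
  runs-on: ubuntu-latest
  env:
    # Illustrative: build and test as if this were the stable branch.
    RUST_CI_OVERRIDE_RELEASE_CHANNEL: stable
  steps:
    - uses: actions/checkout@v3
    - run: ./x.py test
```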
Each label needs to be separated by a comma (see the ICE issue template
for an example of correct usage).
As a result of this problem, the `regression-untriaged` label has not
been automatically added to issues opened with this template.
See c127530be7 for another example of this.
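For example, an issue-template front matter with the corrected form (the template name and label set here are illustrative):

```yaml
---
name: Regression
about: Report a regression in a recent release
# Each label must be separated by a comma; otherwise the labels are
# not applied automatically.
labels: C-bug, regression-untriaged
---
```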
Demote i686-unknown-freebsd to tier 2 compiler target
While technically the `i686-unknown-freebsd` target has been a tier 2 development platform for a long time, with full toolchain tarballs available on static.rust-lang.org, due to a bug in the manifest generation the target was never available for download through rustup.
The infrastructure team privately inquired with the FreeBSD package maintainers, and they weren't relying on those tarballs either, so it's fair to assume practically nobody is using them.
This PR then removes the CI builder that produces full tarballs for the target, and moves the compilation of `rust-std` for the target into `dist-various-2`. The `x86_64-unknown-freebsd` target is *not* affected.
cc `@rust-lang/infra` `@rust-lang/compiler` `@rust-lang/release`
r? `@Mark-Simulacrum`
Promote aarch64-unknown-linux-gnu to Tier 1
This PR promotes the `aarch64-unknown-linux-gnu` target to Tier 1, as proposed by [RFC 2959]:
* The `aarch64-gnu` CI job is moved from `auto-fallible` to `auto`.
* The platform support documentation is updated, uplifting the target to Tier 1 with a note about missing stack probes support.
* Building the documentation is enabled for the target, as we produce the `rust-docs` component for all Tier 1 platforms.
[RFC 2959]: https://github.com/rust-lang/rfcs/pull/2959
Historically we've disabled these assertions on a number of platforms with the
goal of speeding up CI. Now, though, having migrated to GitHub actions, CI is
already pretty fast, and these debug assertions do bring us some value.
This does leave debug assertions disabled on some performance-sensitive
builders: macOS currently hovers at just under 2 hours.
There are also some other builders which have debug and LLVM assertions
disabled:
llvm-8, PR builder:
In one view, this builder tests our support for older LLVMs. But in reality, a
lot of our tests already disable themselves on older LLVMs, and I think our
general stance is that we really only support the in-tree LLVM. Plus, we really
want CI times on this builder to be as low as possible, as it's run on *every* PR --
that's a lot of CI time.
test-various:
This one still disables debug asserts -- as noted in the Dockerfile, we test code
size, and we need debug asserts off for that to work well.
This was recommended by GitHub Support to try reducing the things that
could've caused #78743. I checked the changelog and there should be no
practical impact for us (we already set an explicit fetch-depth).
Promote aarch64-pc-windows-msvc to Tier 2 Development Platform
Adds a GitHub Actions CI build for `aarch64-pc-windows-msvc` via cross-compilation on an x86_64 host.
This promotes `aarch64-pc-windows-msvc` from a Tier 2 Compilation Target (std) to a Tier 2 Development Platform (std+rustc+cargo+tools).
Fixes #72881
r? `@pietroalbini`
Set ninja=true by default
Ninja substantially improves LLVM build time. On a 96-way system, using
Make took 248s, and using Ninja took 161s, a 35% improvement.
We already require a variety of tools to build Rust. If someone wants to
build without Ninja (for instance, to minimize the set of packages
required to bootstrap a new target), they can easily set `ninja=false`
in `config.toml`. Our defaults should help people build Rust (and LLVM)
faster, to speed up development.
The purpose of the auto-fallible job is to run builders that are likely
to fail on CI without gating on them. Having fail-fast enabled there
rather defeats the purpose: if one of them fails, we can't monitor the
outcome of the others.
This was prompted by the aarch64-gnu builder consistently failing due to
a broken test, preventing us from seeing if the macOS spurious failure
is fixed.
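In GitHub Actions matrix terms, the fix is the `fail-fast` flag (the matrix entries here are illustrative):

```yaml
auto-fallible:
  runs-on: ${{ matrix.os }}
  strategy:
    # Let every fallible builder run to completion instead of cancelling
    # the remaining matrix entries when the first one fails.
    fail-fast: false
    matrix:
      include:
        - name: aarch64-gnu
          os: ubuntu-latest
        - name: x86_64-apple
          os: macos-latest
  steps:
    - uses: actions/checkout@v3
    - run: ./x.py test
```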
Add fallible AArch64 CI builder
This adds the `aarch64-gnu` CI builder to the `auto-fallible` job, as a first step in the process of actually gating on it.
r? @Mark-Simulacrum