correctly update goals in the cache
We may want to actually write the response for our goal into the provisional or global cache instead of simply using the result from the last iteration.
r? `@rust-lang/initiative-trait-system-refactor`
x.py fails all downloads that use a tempdir with snap curl #107722
Used Python's built-in `open()` to capture the binary output of the curl command, writing it to a file via the subprocess's stdout. Added a single-line comment mentioning the redirect operator.
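For illustration only, here is the same redirect technique sketched in Rust (the actual fix is in Python in `x.py`; the URL and output path below are hypothetical): the child process's stdout is pointed directly at the destination file, the programmatic equivalent of `curl ... > file`.
```rs
use std::fs::File;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Create the destination file and hand it to the child process as
    // its stdout, so curl's binary output lands in the file directly.
    let out = File::create("download.tmp")?;
    let status = Command::new("curl")
        .args(["--silent", "--show-error", "https://example.invalid/artifact"])
        .stdout(Stdio::from(out)) // equivalent of the shell redirect `>`
        .status()?;
    assert!(status.success());
    Ok(())
}
```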
Extend `BYTE_SLICE_IN_PACKED_STRUCT_WITH_DERIVE`.
This temporarily allows a `str` field in a packed struct using `derive`, alongside `[u8]`.
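For context, a minimal sketch of the kind of code the lint keeps compiling (with a future-compatibility warning) rather than rejecting outright; the type names are made up:
```rs
// Deriving on a packed struct whose unsized tail is a byte slice was
// already temporarily allowed; this change extends that to `str`.
#[repr(packed)]
#[derive(Debug)]
struct WithBytes {
    data: [u8], // previously covered by the lint
}

#[repr(packed)]
#[derive(Debug)]
struct WithStr {
    data: str, // newly covered by this change
}
```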
r? `@RalfJung`
Stabilize feature `cstr_from_bytes_until_nul`
This PR seeks to stabilize `cstr_from_bytes_until_nul`.
Partially addresses #95027
This function has only been on nightly for about 10 months, but I think it is simple enough that there isn't harm in discussing stabilization. It has also had at least a handful of mentions on both the users forum and Discord, so it seems it is already in use or at least known.
This still needs FCP.
Comments on potential discussion points:
- eventual conversion of `CStr` to be a single thin pointer: this function will still be useful to provide a safe way to create a `CStr` after this change.
- should this return a length too, to address concerns about the `CStr` change? I don't see it as being particularly useful, and it seems less ergonomic (i.e. returning `Result<(&CStr, usize), FromBytesUntilNulError>`). I think users that also need this length without the additional `strlen` call are likely better off using a combination of other methods, but this is up for discussion
- `CString::from_vec_until_nul`: this is also useful, but it doesn't even have a nightly implementation merged yet. I propose feature gating that separately, as opposed to blocking this `CStr` implementation on that
Possible alternatives:
A user can call `from_bytes_with_nul` on a slice cut at the first nul, e.g. `&my_slice[..=my_slice.iter().position(|&c| c == 0).unwrap()]`. However, that is significantly less ergonomic, and it is a bit more work for the compiler to optimize compared to the direct `memchr` call that this function wraps.
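A minimal sketch contrasting the two approaches (buffer contents are illustrative):
```rs
use core::ffi::CStr;

fn main() {
    // A buffer longer than the C string it holds.
    let buf = b"hello\0world";

    // New API: finds the first nul for you via an internal memchr.
    let s = CStr::from_bytes_until_nul(buf).unwrap();
    assert_eq!(s.to_bytes(), b"hello");

    // Manual alternative: locate the nul, then slice through it
    // (inclusive, since `from_bytes_with_nul` requires the terminator).
    let nul = buf.iter().position(|&c| c == 0).unwrap();
    let s2 = CStr::from_bytes_with_nul(&buf[..=nul]).unwrap();
    assert_eq!(s, s2);
}
```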
## New stable API
```rs
// both in core::ffi
pub struct FromBytesUntilNulError(());
impl CStr {
    pub const fn from_bytes_until_nul(
        bytes: &[u8]
    ) -> Result<&CStr, FromBytesUntilNulError>
}
```
cc `@ericseppanen` (original author), `@Mark-Simulacrum` (original reviewer), `@m-ou-se` (brought up some issues on the thin-pointer `CStr`)
`@rustbot` modify labels: +T-libs-api +needs-fcp
Now that the compiler accepts the `-Z instrument-xray` option only when
targeting one of the supported targets, make sure not to run the
codegen tests where the compiler would fail.
As with other compiletests, we don't have access to compiler internals,
so simply hardcode a list of supported architectures here.
This is somewhat important because LLVM enables the pass based on
target architecture, but support by the target OS also matters.
For example, XRay attributes are processed by codegen for macOS
targets, but Apple linker fails to process relocations in XRay
data sections, so the feature as a whole is not supported there
for the time being.
Specify where XRay is supported. I only test ARM64 and x86_64, but hey
those others should work too, right? LLVM documentation says that MIPS
and PPC are also supported, but I don't have the hardware, so I won't
pretend. Naturally, more targets can be added later with more testing.
Let's add at least some tests to verify that this option is accepted
and produces expected LLVM attributes. More tests can be added later
with attribute support.
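A hedged sketch of what such a codegen test might look like (compiletest directive syntax; the attribute spelling is LLVM's `function-instrument`, and the exact in-tree test layout may differ):
```rs
// compile-flags: -Z instrument-xray=always

// With instrumentation forced on, the function should carry the LLVM
// XRay attribute in the emitted IR.
// CHECK: attributes #{{.*}} {{.*}}"function-instrument"="xray-always"
#[no_mangle]
pub fn instrumented() {}
```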
Add the attributes to functions according to the settings.
"xray-always" overrides "xray-never", and they both override
"xray-ignore-loops" and "xray-instruction-threshold", but we'll
let lints deal with warnings about silly attribute combinations.
Recognize all the bells and whistles that LLVM's XRay pass is capable of.
The always/never settings are a bit dumb without attributes, but they're
still there. The default instruction count is chosen by the compiler,
not the LLVM pass; we'll do that later.
Negate suggestions when needed in `bool_assert_comparison`
changelog: none (assuming this gets into the same release as #10218)
Fixes #10291
r? `@dswij`
Thanks to `@black-puppydog` for spotting it early
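For context, a small sketch of the pattern involved; the fix is that suggestions for comparisons against `false` must negate the expression rather than just drop the literal:
```rs
fn check(flag: bool) {
    // The lint fires on bool-literal comparisons in assert macros:
    //     assert_eq!(flag, false);
    // A correct suggestion negates the expression:
    assert!(!flag);

    //     assert_ne!(flag, true);
    // likewise becomes:
    assert!(!flag);
}
```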
Rename `replace_bound_vars_with_*` to `instantiate_binder_with_*`
Mentioning "binder" rather than "bound vars", imo, makes it clearer that we're doing something to the binder as a whole.
Also, "instantiate" is the verb that I'm always reaching for when I'm looking for these functions, and the name that we use in the new solver anyways.
r? types
Make `derive_const` derive properly const-if-const impls
Fixes #107774
Fixes #107666
Also fixes rendering of const-if-const bounds in pretty printing.
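As a rough nightly-only sketch of what this enables (the exact feature gates and semantics are unstable and have shifted over time):
```rs
#![feature(derive_const)]
#![feature(const_trait_impl)]

// `derive_const` generates a const-if-const impl instead of a plain one,
// so the derived trait can be used in const contexts.
#[derive_const(PartialEq)]
pub struct Color(u8, u8, u8);

// With the const impl, `==` is usable at compile time.
const _: () = assert!(Color(0, 0, 0) == Color(0, 0, 0));
```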
r? `@oli-obk` or `@fee1-dead`
Change `arena_cache` to not alter the declared query result
This makes the return types a bit clearer, limiting `arena_cache`'s effect to just the computation side. It also makes it easier to potentially remove `arena_cache`.
r? `@cjgillot`
Update strip-ansi-escapes and vte
This updates strip-ansi-escapes from 0.1.0 to 0.1.1 (and consequently vte).
Changes: https://github.com/luser/strip-ansi-escapes/compare/0.1.0...0.1.1
The only real change is the vte update, which fixes some parsing issues (and drops the vendored source size by several megabytes).
Closes #107708
Treat Drop as a rmw operation
Previously, a Drop terminator was considered a move in MIR. This commit changes the behavior to only treat Drop as a mutable access to the dropped place.
In order for this change to be correct, we need to guarantee that
1. A dropped value won't be used again
2. Places that appear in a drop won't be used again before a
subsequent initialization.
We can ensure this is correct at MIR construction because `Drop` is only emitted when a variable goes out of scope, so:
* (1) holds because there is no way of reaching the old value; drop elaboration will also remove any uninitialized drops.
* (2) holds because the place can't be named after the end of the scope.
However, the initialization status, previously tracked by moves, should also be tied to the execution of a Drop, hence the additional logic in the dataflow analyses.
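A surface-level illustration of invariants (1) and (2); the actual change is in MIR dataflow, not in which programs are accepted:
```rs
fn demo() {
    let mut s = String::from("a");
    drop(s);               // `s` is dropped and now uninitialized
    // println!("{s}");    // (1): the old value can no longer be reached
    s = String::from("b"); // (2): the place is re-initialized before reuse
    println!("{s}");
}
```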
From discussion in [this thread](https://rust-lang.zulipchat.com/#narrow/stream/233931-t-compiler.2Fmajor-changes/topic/.60DROP.60.20to.20.60DROP_IF.60.20compiler-team.23558), originating from https://github.com/rust-lang/compiler-team/issues/558.
See also https://github.com/rust-lang/rust/pull/104488#discussion_r1085556010
Implement cursors for BTreeMap
See the ACP for an overview of the API: https://github.com/rust-lang/libs-team/issues/141
The implementation is split into 2 commits:
- The first changes the internal insertion functions to return a handle to the newly inserted element. The lifetimes involved are a bit hairy since we need a mutable handle to both the `BTreeMap` itself (which holds the root) and the nodes allocated in memory. I have tested that this passes the standard library testsuite under miri.
- The second commit implements the cursor API itself. This is more straightforward to follow but still involves some unsafe code to deal with simultaneous mutable borrows of the tree root and the node that is currently being iterated.
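A rough sketch of the cursor API as proposed in the ACP (nightly-only; exact method names may have changed since):
```rs
#![feature(btree_cursors)]

use std::collections::BTreeMap;
use std::ops::Bound;

fn main() {
    let mut map = BTreeMap::from([(1, "a"), (3, "c"), (5, "e")]);

    // Position a mutable cursor at the first entry with key >= 2.
    let mut cursor = map.lower_bound_mut(Bound::Included(&2));
    assert_eq!(cursor.key(), Some(&3));

    // Step forward without re-traversing from the root.
    cursor.move_next();
    assert_eq!(cursor.key(), Some(&5));
}
```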
Support DidChangeWorkspaceFolders notifications
This PR enables the `WorkspaceFoldersServerCapabilities` capability for rust-analyzer and implements support for the associated [`DidChangeWorkspaceFolders`](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#workspace_didChangeWorkspaceFolders) notification, allowing clients to update the list of `workspaceFolders` sent during initialization.
## Motivation
This allows clients which lazily autodiscover their workspace roots (like the [helix editor](https://github.com/helix-editor/helix) once [my PR](https://github.com/helix-editor/helix/pull/5748) lands) to avoid spawning multiple instances of RA. Right now such clients are forced to either:
* greedily discover all LSP roots in the workspace (precludes the ability to respond to new workspace roots)
* spawn multiple instances of rust-analyzer (one for each root)
* restart rust-analyzer whenever a new workspace is added
Some example use-cases are shown [here](https://github.com/helix-editor/helix/pull/5748#issuecomment-1421012523).
This PR will also improve support for multi-root workspaces in VSCode (and Atom).
## Implementation
The implementation was fairly straightforward, as `rust-analyzer` already supports dynamically reloading workspaces, for example on configuration changes. Furthermore, rust-analyzer already supports auto-discovering workspaces from the `workspaceFolders` key in the initialization request. Therefore, the necessary logic just needed to be moved to a central place and reused.
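For illustration, a hedged sketch of the notification payload using the `lsp-types` crate (the folder URI and name are hypothetical):
```rs
use lsp_types::{
    notification::{DidChangeWorkspaceFolders, Notification},
    DidChangeWorkspaceFoldersParams, Url, WorkspaceFolder, WorkspaceFoldersChangeEvent,
};

fn main() {
    // The client reports one folder added and none removed; the server
    // then reloads its workspace set, reusing the existing reload logic.
    let params = DidChangeWorkspaceFoldersParams {
        event: WorkspaceFoldersChangeEvent {
            added: vec![WorkspaceFolder {
                uri: Url::parse("file:///home/user/new-crate").unwrap(),
                name: "new-crate".to_owned(),
            }],
            removed: vec![],
        },
    };
    assert_eq!(
        DidChangeWorkspaceFolders::METHOD,
        "workspace/didChangeWorkspaceFolders"
    );
    let _ = params;
}
```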