Updated all call sites that used the other constructors to uniformly
call `Lvalue::new_with_hint`, passing along the appropriate kind
of hint for each context.
Placated tidy in a few other places in datum.rs.
Added code to maintain these hints at runtime, and to conditionalize
drop-filling and calls to destructors.
In this early stage, we are using hints, so we are always free to
leave out a flag for a path -- then we just pass `None` as the
dropflag hint in the corresponding schedule cleanup call. But, once a
path has a hint, we must at least maintain it: i.e. if the hint
exists, we must ensure it is never set to "moved" if the data in
question might actually have been initialized. It remains sound to
conservatively set the hint to "initialized" as long as the true
drop-flag embedded in the value itself is up-to-date.
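A toy model of that invariant (not compiler code; `DropHint` and `HintValue` are invented here purely for illustration):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum HintValue {
    NeedsDrop, // "initialized": the destructor still has to run
    Moved,     // "moved": the destructor may be skipped
}

// A path either carries no hint (`None`) or a hint that must be maintained.
struct DropHint(Option<HintValue>);

impl DropHint {
    // Only ever record `Moved` when the data is definitely no longer
    // initialized; claiming "moved" for possibly-live data would be unsound.
    fn on_move_out(&mut self) {
        if let Some(v) = self.0.as_mut() {
            *v = HintValue::Moved;
        }
    }

    // Conservatively claiming "initialized" is always sound, because the
    // true drop flag embedded in the value itself stays up to date.
    fn on_possible_init(&mut self) {
        if let Some(v) = self.0.as_mut() {
            *v = HintValue::NeedsDrop;
        }
    }
}
```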
----
Here are some high-level details I want to point out:
* We maintain the hint in Lvalue::post_store, marking the lvalue as
moved. (But also continue drop-filling if necessary.)
* We update the hint on ExprAssign.
* We pass along the hint in once closures that capture-by-move.
* You only call `drop_ty` for state that does not have an associated hint.
If you have a hint, you must call `drop_ty_core` instead.
(Originally I passed the hint into `drop_ty` as well, to make the
connection to a hint more apparent, but the vast majority of
current calls to `drop_ty` are in contexts where no hint is
available, so it just seemed like noise in the resulting diff.)
Instrumented call sites that construct Lvalues to ease tracking down
cases where we might need to change whether or not they carry a hint.
Note that this commit does not do anything to actually *construct*
the `lldropflag_hints` map, nor does it change anything about codegen
itself. Those parts are in follow-on commits.
(already thumbs-upped pre-rebase by nikomatsakis)
The refactoring here is trivial because `trans::datum::Lvalue`
currently carries no payload. However, future commits will start
adding a payload to `Lvalue`, and thus will force us either
1. to thread the payload through the `_match` code (a long term goal), or
2. to ensure the payload has some reasonable default value.
The "hint" mechanism is essentially used as a workaround to compute
types for expressions which have not yet been type-checked. This
commit clarifies that usage, and limits the effects to the places
where it is currently necessary.
Fixes #26210.
If we match a whole struct or tuple, the "field" for the reassignment
checker will be `None`, which means that mutating any field should count
as a reassignment.
Fixes #26996.
The current split between `create_datums_for_fn_args`,
`copy_args_to_allocas` and `store_arg` involves a detour via rvalue
datums, which causes additional work in the form of
insertvalue/extractvalue pairs for fat pointer arguments, and an extra
alloca and memcpy for tupled args in rust-call functions.
By merging those three functions into just one that actually covers the
whole process of creating the final argument datums, we can skip all
that. It also lets us easily merge in the handling of rust-call
functions, making `create_datum_for_fn_args_under_call_abi` obsolete.
cc #26600 -- The insertvalue instructions kicked us off of fast-isel.
This has a number of advantages compared to creating a copy in memory
and passing a pointer. The obvious one is that we don't have to put the
data into memory but can keep it in registers. Since we're currently
passing a pointer anyway (instead of using e.g. a known offset on the
stack, which is what the `byval` attribute would achieve), we only use a
single additional register for each fat pointer, but save at least two
pointers worth of stack in exchange (sometimes more because more than
one copy gets eliminated). On archs that pass arguments on the stack, we
save a pointer worth of stack even without considering the omitted
copies.
Additionally, LLVM can optimize the code a lot better, to a large degree
due to the fact that lots of copies are gone or can be optimized away.
Furthermore, we can now emit attributes like nonnull on the data and/or
vtable pointers contained in the fat pointer, potentially allowing for
even more optimizations.
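For context, here is a rough sketch of the two words that make up a slice fat pointer (the `RawSlice` struct is illustrative only, not the compiler's internal representation; trait-object references are analogous, with a vtable pointer in place of the length):

```rust
// Illustrative only: the two words behind a `&[u8]`.
#[repr(C)]
struct RawSlice {
    data: *const u8, // pointer to the first element
    len: usize,      // number of elements
}

// With this change, a `&[u8]` argument like `s` is passed as its two
// component words (data pointer and length) rather than as a pointer to a
// temporary copy of that pair in the caller's stack frame.
fn first_byte(s: &[u8]) -> Option<u8> {
    s.first().copied()
}

fn main() {
    let v = vec![1u8, 2, 3];
    assert_eq!(first_byte(&v), Some(1));
}
```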
This results in LLVM passes being about 3-7% faster (depending on the
crate), and the resulting code is also a few percent smaller, for
example:
   text    data  filename
5671479 3941461  before/librustc-d8ace771.so
5447663 3905745  after/librustc-d8ace771.so
1944425 2394024  before/libstd-d8ace771.so
1896769 2387610  after/libstd-d8ace771.so
I had to remove a call in the backtrace-debuginfo test, because LLVM can
now merge the tails of some blocks when optimizations are turned on,
which can't correctly preserve line info.
Fixes #22924
Cc #22891 (at least for fat pointers the code is good now)
Closes #25046 (by rejecting the code that causes the ICE) and #24946. I haven't been able to deal with the array size or recursion issues yet for associated consts, though my hope was that the change I made for range match patterns might help with array sizes, too.
This PR is pretty much orthogonal to #25065.
Inspect enum discriminant *after* calling its destructor
Includes some drive-by cleanup (e.g. changed some field and method names to reflect fill-on-drop; added comments about zero-variant enums being classified as `_match::Single`).
Probably the most invasive change was the expansion of the maps `available_drop_glues` and `drop_glues` to now hold two different kinds of drop glue: the (old) normal drop glue, and the (new) drop-contents glue that jumps straight to dropping the contents of a struct or enum, skipping its destructor.
* For all types that do not have user-defined Drop implementations, the normal glue is generated as usual (i.e. recursively dropping the fields of the data structure).
(And this actually is exactly what the newly-added drop-contents glue does as well.)
* For types that have user-defined Drop implementations, the "normal" drop glue now schedules a cleanup before invoking the `Drop::drop` method; that cleanup calls the drop-contents glue after the invocation returns.
Fix #23611.
----
Is this a breaking change? The prior behavior was totally unsound, and it seems unreasonable that anyone was actually relying on it.
Nonetheless, since there is a user-visible change to the language semantics, I guess I will conservatively mark this as a:
[breaking-change]
(To see an example of what sort of user-visible change this causes, see the comments in the regression test.)
That is, scheduled drops are executed in reverse order, so for
correctness we *schedule* the lifetime end before we schedule the
drop; then, at execution time, the drop runs *before* the lifetime
end.
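A toy illustration of that LIFO property (plain Rust standing in for the cleanup scheduler; none of these names are compiler APIs):

```rust
fn main() {
    // Cleanups are pushed in schedule order and run in reverse (LIFO).
    let mut scheduled = vec!["lifetime end", "drop"];
    while let Some(cleanup) = scheduled.pop() {
        // Prints "drop" first, then "lifetime end".
        println!("running cleanup: {}", cleanup);
    }
}
```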
without invoking the Drop::drop implementation.
This is necessary for dealing with an enum that switches its own `self` to
a different variant while running its destructor.
Fix #23611.
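A hedged sketch of the kind of code this protects (not the actual regression test; to stay in safe code, the variant switch here happens in an enclosing struct's destructor rather than in a destructor on the enum itself):

```rust
enum Payload {
    Boxed(Box<u32>), // variant whose contents need dropping
    Empty,           // variant with nothing left to drop
}

struct Wrapper {
    payload: Payload,
}

impl Drop for Wrapper {
    fn drop(&mut self) {
        // Switch the enum to a different variant; the old `Boxed` payload
        // (and its Box) is dropped right here by the assignment.
        self.payload = Payload::Empty;
    }
}

fn main() {
    let _w = Wrapper { payload: Payload::Boxed(Box::new(42)) };
    // When `_w` goes out of scope, `Wrapper::drop` runs first. The glue that
    // then drops `payload` must inspect the discriminant the value has *now*
    // (`Empty`), not the one it had before the destructor ran; otherwise it
    // would try to free a Box that is already gone.
}
```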
In addition to being nicer, this also allows you to use `sum` and `product` for
iterators yielding custom types aside from the standard integers.
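A usage sketch of that flexibility, written against today's `std::iter::Sum` trait (the exact trait shape introduced by this change may have differed):

```rust
use std::iter::Sum;

#[derive(Debug, PartialEq)]
struct Meters(u32);

impl Sum for Meters {
    fn sum<I: Iterator<Item = Meters>>(iter: I) -> Meters {
        // Sum the underlying integers and re-wrap them.
        Meters(iter.map(|m| m.0).sum())
    }
}

fn main() {
    let total: Meters = vec![Meters(1), Meters(2), Meters(3)].into_iter().sum();
    assert_eq!(total, Meters(6));
}
```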
Due to the removal of the `AdditiveIterator` and `MultiplicativeIterator` traits,
this is a breaking change.
[breaking-change]
Refactored code so that the drop-flag values for initialized
(`DTOR_NEEDED`) versus dropped (`DTOR_DONE`) are given explicit names.
Added `mem::dropped()`; with `DTOR_DONE == 0` it is semantically the
same as `mem::zeroed`, but it abstracts away from the particular
choice of value for `DTOR_DONE`.
Filling-drop needs to use something other than `ptr::read_and_zero`,
so I added such a function: `ptr::read_and_drop`. But libraries
should not use it if they can otherwise avoid it.
Fixes to tests to accommodate filling-drop.
The reassignment checker effectively only checks whether the last
assignment in a body affects the discriminant, but it should of course
check all the assignments.
Fixes #23698
After this patch, code like `let ref a = *"abcdef"` no longer causes an ICE.
Required for #23121
There are still places in rustc_trans where pointers are always assumed to be thin. In particular, #19064 is not resolved by this patch.
This patch changes the type of byte string literals from `&[u8]` to `&[u8; N]`.
It also implements some necessary traits (`IntoBytes`, `Seek`, `Read`, `BufRead`) for fixed-size arrays (also related to #21725) and adds a test for #17233, which seems to be resolved.
Fixes #18465
[breaking-change]
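A small illustration of the byte string literal change (using current syntax):

```rust
fn main() {
    // The literal's type now records its length ...
    let array: &'static [u8; 6] = b"abcdef";
    // ... and it still coerces to a slice where one is expected.
    let slice: &[u8] = b"abcdef";
    assert_eq!(array.len(), slice.len());
}
```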