This adds support for lint groups to the compiler. A lint group collects a
number of lints under a single name. For example, this patch defines a group
for the naming-convention lints, named `bad_style`: writing
`#[allow(bad_style)]` is equivalent to writing
`#[allow(non_camel_case_types, non_snake_case, non_uppercase_statics)]`.
Compiler plugins can define their own lint groups using the new
`Registry::register_lint_group` method.
Two built-in lint groups are provided, `bad_style` and `unused`. The contents
of these groups can be seen by running `rustc -W help`.
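As a sketch of the equivalence described above (attribute syntax only; the
module names and bodies are placeholders):

    #[allow(bad_style)]
    mod generated_a {
        // `bad_style` covers the three naming-convention lints listed below.
    }

    #[allow(non_camel_case_types, non_snake_case, non_uppercase_statics)]
    mod generated_b {
        // Equivalent to `#[allow(bad_style)]` above.
    }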
This unifies the `non_snake_case_functions` and `uppercase_variables` lints
into one lint, `non_snake_case`. It also now checks for non-snake-case modules.
This also extends the non-camel-case types lint to check type parameters, and
merges the `non_uppercase_pattern_statics` lint into the
`non_uppercase_statics` lint.
Because the `uppercase_variables` lint is now part of the `non_snake_case`
lint, all non-snake-case variables that start with lowercase characters (such
as `fooBar`) will now trigger the `non_snake_case` lint.
Code should be updated to use the new `non_snake_case` lint instead of the
previous `non_snake_case_functions` and `uppercase_variables` lints, and all
uses of the `non_uppercase_pattern_statics` lint should be replaced with the
`non_uppercase_statics` lint. Any code that previously contained non-snake-case
module or variable names should be updated to use snake-case names or disable
the `non_snake_case` lint. Any code with non-camel-case type parameters should
be changed to use camel case or disable the `non_camel_case_types` lint.
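For example (a sketch; the first binding previously slipped past
`uppercase_variables` because it starts with a lowercase letter):

    fn example() {
        let fooBar = 1;   // now triggers the `non_snake_case` lint
        let foo_bar = 1;  // conforms, no warning
        let _ = (fooBar, foo_bar);
    }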
[breaking-change]
Per API meeting
https://github.com/rust-lang/meeting-minutes/blob/master/Meeting-API-review-2014-08-13.md
# Changes to `core::option`
Most of the module is marked as stable or unstable; most of the unstable items are awaiting resolution of conventions issues.
However, a few methods have been deprecated, either due to lack of use or redundancy:
* `take_unwrap`, `get_ref`, and `get_mut_ref` (redundant, and we prefer this functionality to go through an explicit `.unwrap`; see the sketch after this list)
* `filtered` and `while`
* `mutate` and `mutate_or_set`
* `collect`: this functionality is being moved to a new `FromIterator` impl.
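A minimal sketch of the `.unwrap`-based replacements mentioned in the first
bullet (the non-deprecated calls shown are the usual `Option` accessors):

    fn demo() {
        let mut opt = Some(1);
        // let r = *opt.get_ref();       // deprecated accessor
        let r = *opt.as_ref().unwrap();  // explicit unwrap instead
        // let v = opt.take_unwrap();    // deprecated
        let v = opt.take().unwrap();     // explicit unwrap instead
        assert_eq!((r, v), (1, 1));
    }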
# Changes to `core::result`
Most of the module is marked as stable or unstable; most of the unstable items are awaiting resolution of conventions issues.
* `collect`: this functionality is being moved to a new `FromIterator` impl.
* `fold_` is deprecated due to lack of use
* Several methods found in `core::option` are added here, including `iter`, `as_slice`, and variants.
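For instance, the `FromIterator`-based `collect` behaves roughly like this (a
sketch in current syntax; `parse_one` is a hypothetical fallible function):

    fn parse_one(s: &str) -> Result<i32, String> {
        s.parse().map_err(|_| format!("bad number: {}", s))
    }

    fn parse_all(items: &[&str]) -> Result<Vec<i32>, String> {
        // Stops at the first `Err`, otherwise collects all the `Ok` values.
        items.iter().map(|&s| parse_one(s)).collect()
    }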
Due to deprecations, this is a:
[breaking-change]
This commit implements the {Tcp,Unix}Acceptor::{clone,close_accept} methods for
unix. A Windows implementation is coming in a later commit.
The clone implementation is based on atomic reference counting (as with all
other clones), and the close_accept implementation is based on selecting on a
self-pipe which signals that a close has been seen.
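For illustration only, the core of the self-pipe idea looks roughly like this
(a sketch, not the actual runtime code; it assumes the `libc` crate on a Unix
target, and the accept loop, which would select(2) on both the listening fd
and the pipe's read end, is elided):

    use std::io;
    use std::os::unix::io::RawFd;

    // Create the pipe whose read end the accept loop adds to its select set.
    fn make_self_pipe() -> io::Result<(RawFd, RawFd)> {
        let mut fds = [0 as libc::c_int; 2];
        if unsafe { libc::pipe(fds.as_mut_ptr()) } != 0 {
            return Err(io::Error::last_os_error());
        }
        Ok((fds[0], fds[1])) // (read end, write end)
    }

    // Called by `close_accept`: wakes any task blocked in accept().
    fn signal_close(write_fd: RawFd) {
        let byte = [1u8];
        unsafe { libc::write(write_fd, byte.as_ptr() as *const libc::c_void, 1) };
    }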
As of RFC 18, struct layout is undefined. Opting into a C-compatible struct
layout is now done with #[repr(C)]. For consistency, specifying a packed
layout is now also done with #[repr(packed)]. Both can be specified together.
To fix errors caused by this, add #[repr(C)] to the affected structs, and
change #[packed] to #[repr(packed)].
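A sketch of the migration (field types are arbitrary):

    // Needed for FFI now that the default layout is undefined:
    #[repr(C)]
    struct ForeignPair { a: u8, b: u32 }

    // Was `#[packed]`; both representations can be combined:
    #[repr(C, packed)]
    struct PackedPair { a: u8, b: u32 }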
Closes #14309
[breaking-change]
Imports, `extern crate` declarations, and other items may no longer be
declared with the same name in the same scope.
This breaks several common patterns. First are unused imports:
use foo::bar;
use baz::bar;
Change this code to the following:
use baz::bar;
Second, this patch breaks globs that import names that are shadowed by
subsequent imports. For example:
use foo::*; // including `bar`
use baz::bar;
Change this code to remove the glob:
use foo::{boo, quux};
use baz::bar;
Or qualify all uses of `bar`:
use foo::{boo, quux};
use baz;
... baz::bar ...
Finally, this patch breaks code that, at top level, explicitly imports
`std` and doesn't disable the prelude.
extern crate std;
Because the prelude imports `std` implicitly, there is no need to
explicitly import it; just remove such directives.
The old behavior can be opted into via the `import_shadowing` feature
gate. Use of this feature gate is discouraged.
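Opting back into the old behaviour looks like this (discouraged, as noted
above):

    // At the crate root; re-enables the old shadowing behaviour.
    #![feature(import_shadowing)]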
This implements RFC #116.
Closes #16464.
[breaking-change]
This leaves the `Share` trait at `std::kinds` via a `#[deprecated]` `pub use`
statement, but the `NoShare` struct is no longer part of `std::kinds::marker`
due to #12660 (the build cannot bootstrap otherwise).
All code referencing the `Share` trait should now reference the `Sync` trait,
and all code referencing the `NoShare` type should now reference the `NoSync`
type. The functionality and meaning of this trait have not changed, only the
naming.
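In code, the migration is just a rename of the bound (a minimal sketch;
`assert_sync` is a hypothetical helper):

    // Before: fn assert_sync<T: Share>(_: T) {}
    fn assert_sync<T: Sync>(_: T) {}
    // Likewise, `marker::NoShare` fields become `marker::NoSync`.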
Closes #16281
[breaking-change]
This commit stabilizes the `std::sync::atomics` module, renaming it to
`std::sync::atomic` to match library precedent elsewhere, and tightening
up behavior around incorrect memory ordering annotations.
The vast majority of the module is now `stable`. However, the
`AtomicOption` type has been deprecated, since it is essentially unused
and is not truly a primitive atomic type. It will eventually be replaced
by a higher-level abstraction like MVars.
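Typical usage of the module under its new name (a sketch using present-day
type names; only the module path and stability attributes are what this
commit changes):

    use std::sync::atomic::{AtomicBool, Ordering};

    fn demo() {
        let flag = AtomicBool::new(false);
        flag.store(true, Ordering::SeqCst);
        assert!(flag.load(Ordering::SeqCst));
    }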
Due to deprecations, this is a:
[breaking-change]
The original trick used to trigger unwinds would not work with GCC's implementation of SEH, so I had to invent a new one. rust_try now consists of two routines: the outer one, whose handler triggers unwinds, and the inner one, which stops unwinds by having a landing pad that swallows exceptions and passes them on to the outer routine via a normal return.
As discovered in #15460, a particular #[link(kind = "static", ...)] line is not
actually guaranteed to link the library at all. The reason is that if none of
the external library's symbols are referenced by the object generated by rustc,
the linker drops the entire library.
For dynamic native libraries, this is solved by passing -lfoo for all downstream
compilations unconditionally. For static libraries in rlibs this is solved
because the entire archive is bundled in the rlib. The only situation in which
this was a problem was when a static native library was linked to a rust dynamic
library.
This commit brings the behavior of dylibs in line with rlibs by passing the
--whole-archive flag to the linker when linking native libraries. On OSX, this
uses the -force_load flag. This flag ensures that the entire archive is
considered a candidate for being linked into the final dynamic library.
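For reference, the kind of declaration this affects (a sketch; `foo` and
`foo_init` are hypothetical):

    // libfoo.a is now passed to the linker with --whole-archive (or
    // -force_load on OSX) when this crate is built as a dylib, so it is
    // kept even if `foo_init` is never referenced.
    #[link(name = "foo", kind = "static")]
    extern {
        fn foo_init();
    }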
This is a breaking change because if any static library is included twice in
the same compilation unit, the linker will now emit errors about duplicate
definitions. The fix is to link each such static library only once.
Closes #15460
[breaking-change]
This was motivated by a desire to remove allocation in the common
pattern of
let old = key.replace(None);
do_something();
key.replace(old);
This also switched the map representation from a Vec to a TreeMap. A Vec
may be reasonable if there's only a couple TLD keys, but a TreeMap
provides better behavior as the number of keys increases.
Like the Vec, this TreeMap implementation does not shrink the container
when a value is removed. Unlike Vec, this TreeMap implementation cannot
reuse an empty node for a different key. Therefore any key that has been
inserted into the TLD at least once will continue to take up space in
the map until the task ends. The expectation is that the majority of keys
inserted into the TLD will have a value for most of the rest of the task's
lifetime. If this assumption is wrong,
there are two reasonable ways to fix this that could be implemented in
the future:
1. Provide an API call either to remove a specific key from the TLD and
destroy its node (e.g. `remove()`), or to explicitly clean up all
currently-empty nodes in the map (e.g. `compact()`). This is simple,
but requires the user to call it explicitly.
2. Keep track of the number of empty nodes in the map and, when the map
is mutated (via `replace()`), compact it automatically once the number
of empty nodes passes some threshold (a rough sketch of this follows
below). Alternatively, compact the map whenever a new key that hasn't
been used before is inserted.
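A rough sketch of the second option (hypothetical helper; `BTreeMap` stands in
for the task-local map, and the threshold is arbitrary):

    use std::collections::BTreeMap;

    const EMPTY_THRESHOLD: usize = 8;

    // Replace the value for `key`, then compact if too many keys are empty.
    fn replace_and_maybe_compact(
        map: &mut BTreeMap<u32, Option<String>>,
        key: u32,
        value: Option<String>,
    ) -> Option<String> {
        let old = map.insert(key, value).flatten();
        let empty = map.values().filter(|v| v.is_none()).count();
        if empty > EMPTY_THRESHOLD {
            map.retain(|_, v| v.is_some()); // drop the empty nodes
        }
        old
    }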
---
Benchmarks:
I ran 3 benchmarks. tld_replace_none just replaces the tld key with None
repeatedly. tld_replace_some replaces it with Some repeatedly. And
tld_replace_none_some simulates the common behavior of replacing with
None, then replacing with the previous value again (which was a Some).
Old implementation:
test tld_replace_none ... bench: 20 ns/iter (+/- 0)
test tld_replace_none_some ... bench: 77 ns/iter (+/- 4)
test tld_replace_some ... bench: 57 ns/iter (+/- 2)
New implementation:
test tld_replace_none ... bench: 11 ns/iter (+/- 0)
test tld_replace_none_some ... bench: 23 ns/iter (+/- 0)
test tld_replace_some ... bench: 12 ns/iter (+/- 0)
Closes #16097 (fix variable name in tutorial)
Closes #16100 (More defailbloating)
Closes #16104 (Fix deprecation comment on `core::cmp::lexical_ordering`)
Closes #16105 (fix formatting in pointer guide table)
Closes #16107 (remove serialize::ebml, add librbml)
Closes #16108 (Fix heading levels in pointer guide)
Closes #16109 (rustrt: Don't conditionally init the at_exit QUEUE)
Closes #16111 (hexfloat: Deprecate to move out of the repo)
Closes #16113 (Add examples for GenericPath methods.)
Closes #16115 (Byte literals!)
Closes #16116 (Add a non-regression test for issue #8372)
Closes #16120 (Deprecate semver)
Closes #16124 (Deprecate uuid)
Closes #16126 (Deprecate fourcc)
Closes #16127 (Remove incorrect example)
Closes #16129 (Add note about production deployments.)
Closes #16131 (librustc: Don't ICE when trying to subst regions in destructor call.)
Closes #16133 (librustc: Don't ICE with struct exprs where the name is not a valid struct.)
Closes #16136 (Implement slice::Vector for Option<T> and CVec<T>)
Closes #16137 (alloc, arena, test, url, uuid: Elide lifetimes.)
Not included are two required patches:
* LLVM: segmented stack support for DragonFly [1]
* jemalloc: simple configure patches
[1]: http://reviews.llvm.org/D4705
The correct terminology is Task-Local Data, or TLD. Task-Local Storage,
or TLS, is the old terminology that was abandoned because of the
confusion with Thread-Local Storage (TLS).
As with libnative, when a green task failed to spawn it would leave the world
in a corrupt state: the local scheduler had been dropped, as had the local
task. Also as with libnative, this patch sets up a "bomb" which, when it goes
off, restores the state of the world.
Previously, the call to bookkeeping::increment() was never paired with a
decrement when the spawn failed (due to unwinding). This fixes the problem by
returning a "bomb" from increment() that decrements on drop, and then moving
the bomb into the child task's procedure, where it is dropped naturally.
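The bookkeeping "bomb" is essentially an RAII guard; a minimal sketch of the
pattern (names are hypothetical, not the actual `bookkeeping` module):

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Global count of live tasks (stand-in for the real bookkeeping state).
    static TASK_COUNT: AtomicUsize = AtomicUsize::new(0);

    // The "bomb": its destructor performs the matching decrement.
    struct Bomb;

    fn increment() -> Bomb {
        TASK_COUNT.fetch_add(1, Ordering::SeqCst);
        Bomb
    }

    impl Drop for Bomb {
        fn drop(&mut self) {
            TASK_COUNT.fetch_sub(1, Ordering::SeqCst);
        }
    }

    // Build the child's procedure. If anything unwinds before the closure is
    // handed off, `bomb` is dropped here and the count is restored; otherwise
    // the bomb moves into the procedure and is dropped when the child finishes.
    fn setup_child<F: FnOnce() + Send>(body: F) -> impl FnOnce() + Send {
        let bomb = increment();
        move || {
            let _bomb = bomb;
            body();
        }
    }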
When a new task fails to spawn, it triggers a task failure of the spawning task.
This ends up causing runtime aborts today because of the destructor bomb in the
Task structure. The bomb doesn't actually need to go off until *after* the task
has run at least once.
This now prevents a runtime abort when a native thread fails to spawn.