Commit Graph

90 Commits

Author SHA1 Message Date
Jan-Erik Rediger
deafab19be [LLVM-3.9] Increase PIELevel
Previously we had a PositionIndependentExecutable option; now we simply
choose the highest level. This should be equivalent.

🍰
2016-07-29 10:29:44 +02:00
Jan-Erik Rediger
9e706f90cb [LLVM-3.9] Configure PIE at the module level instead of compilation unit level
This was deleted in [1] and appears to have been replaced by [2],
which adds a new setPIELevel function on the LLVM module itself.

[1]: http://reviews.llvm.org/D19753
[2]: http://reviews.llvm.org/D19671
2016-07-29 10:29:44 +02:00
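
Against the LLVM 3.9 C++ API, the module-level PIE configuration described in the two commits above amounts to roughly the following sketch (the helper name is illustrative, not the actual rustllvm code):

    #include "llvm/IR/Module.h"
    #include "llvm/Support/CodeGen.h"

    // Mark a module as a position-independent executable. LLVM 3.9 replaced
    // the old TargetOptions::PositionIndependentExecutable flag with a
    // per-module setting, so we simply request the highest (large) level.
    static void configurePIE(llvm::Module &M) {
      M.setPIELevel(llvm::PIELevel::Large);
    }
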
Jan-Erik Rediger
5b44e10fb7 [LLVM-3.9] Preserve certain functions when internalizing
This makes sure to still use the old way for older LLVM versions.
2016-07-29 10:29:44 +02:00
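
A hedged sketch of the newer interface this commit targets, assuming LLVM 3.9's callback-based createInternalizePass (the symbol set and helper name are illustrative; older LLVM versions instead take an explicit export list):

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/IPO.h"
    #include <set>
    #include <string>

    // Illustrative set of symbols that must survive internalization.
    static const std::set<std::string> ExportedSymbols = {"main"};

    static void addInternalizePass(llvm::legacy::PassManager &PM) {
      // LLVM 3.9: supply a predicate that returns true for globals to preserve.
      PM.add(llvm::createInternalizePass([](const llvm::GlobalValue &GV) {
        return ExportedSymbols.count(GV.getName().str()) != 0;
      }));
    }
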
Jan-Erik Rediger
6ed5db8d35 [LLVM-3.9] Specify that we are using the legacy interface
LLVM's pass manager infrastructure is currently being rewritten to be
more flexible, but the rewrite isn't complete yet.
2016-07-29 10:29:44 +02:00
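
Concretely, the spelling change is along these lines (a sketch, not the exact rustllvm diff): the classic pass manager types are now referred to through the llvm::legacy namespace:

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/IR/Module.h"

    static void runPasses(llvm::Module &M) {
      // Explicitly use the legacy pass managers; passes would be added to
      // these via add() before running.
      llvm::legacy::PassManager MPM;
      llvm::legacy::FunctionPassManager FPM(&M);

      FPM.doInitialization();
      for (llvm::Function &F : M)
        FPM.run(F);
      FPM.doFinalization();

      MPM.run(M);
    }
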
Cameron Hart
ce50bedd8c Pass -DLLVM_RUSTLLVM to compile against the Rust LLVM fork.
If using system LLVM, don't try to use modifications made in the fork.
2016-07-24 19:49:10 +10:00
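
The define then acts as an ordinary preprocessor guard; a minimal sketch (the helper is illustrative, not part of the build system):

    #include <cstdio>

    // Report whether this glue code was built against the Rust LLVM fork
    // (-DLLVM_RUSTLLVM) or against a stock system LLVM.
    static void reportLLVMFlavor() {
    #ifdef LLVM_RUSTLLVM
      std::puts("built against the Rust LLVM fork; fork-only modifications available");
    #else
      std::puts("built against system LLVM; skipping fork-only modifications");
    #endif
    }
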
Cameron Hart
e1efa324ec Add help for target CPUs, features, relocation and code models. 2016-07-11 00:22:13 +10:00
Jake Goulding
4f01329e0e Reflect supporting only LLVM 3.7+ in the LLVM wrappers 2016-06-09 15:59:26 -04:00
Andrea Canciani
c883463e94 Implement feature extraction from TargetMachine
Add the `LLVMRustHasFeature` function to check whether a
`TargetMachine` has a given feature.
2016-04-09 00:39:04 +02:00
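
A minimal sketch of how such a check can be phrased against LLVM's C++ API (an assumed approach with an illustrative helper name, not necessarily the function's real body):

    #include "llvm/MC/MCSubtargetInfo.h"
    #include "llvm/Target/TargetMachine.h"
    #include <string>

    // Return true if the TargetMachine's subtarget has the named feature
    // enabled, e.g. hasFeature(TM, "sse4.2").
    static bool hasFeature(const llvm::TargetMachine &TM, const std::string &Feature) {
      const llvm::MCSubtargetInfo *STI = TM.getMCSubtargetInfo();
      // checkFeatures() parses a feature string such as "+sse4.2" and reports
      // whether every listed feature is set for this subtarget.
      return STI && STI->checkFeatures("+" + Feature);
    }
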
Corey Farwell
d9426210b1 Register LLVM passes with the correct LLVM pass manager.
LLVM was upgraded to a new version in this commit:

f9d4149c29

which was part of this pull request:

https://github.com/rust-lang/rust/issues/26025

Consider the following two lines from that commit:

f9d4149c29 (diff-a3b24dbe2ea7c1981f9ac79f9745f40aL462)

f9d4149c29 (diff-a3b24dbe2ea7c1981f9ac79f9745f40aL469)

The purpose of these lines is to register LLVM passes. Prior to that
commit, the passes being handled were assumed to be ModulePasses (a
specific type of LLVM pass) since they were being added to a ModulePass
manager. After that commit, both lines were refactored (presumably in an
attempt to DRY out the code), but the ModulePasses were changed to be
registered to a FunctionPass manager. This change resulted in
ModulePasses being run, but a Function object was being passed as a
parameter to the pass instead of a Module, which resulted in
segmentation faults.

In this commit, I changed relevant sections of the code to check the
type of the passes being added and register them to the appropriate pass
manager.

Closes https://github.com/rust-lang/rust/issues/31067
2016-01-25 00:15:39 -05:00
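
In LLVM's C++ API the pass kind can be queried directly, so the fix boils down to a check of the following shape (a sketch with illustrative names):

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/Pass.h"

    // Add a pass to whichever legacy pass manager matches its kind, so a
    // ModulePass is never handed a Function (and vice versa).
    static void addPass(llvm::legacy::PassManager &MPM,
                        llvm::legacy::FunctionPassManager &FPM,
                        llvm::Pass *P) {
      if (P->getPassKind() == llvm::PT_Module)
        MPM.add(P);
      else
        FPM.add(P);
    }
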
Brian Anderson
99741700e5 Remove segmented stack option from LLVMRustCreateTargetMachine. Unused. 2015-11-19 16:58:23 -08:00
Seo Sanghyeon
b285f92025 rustllvm: Update to LLVM trunk 2015-10-24 18:42:23 +09:00
Seo Sanghyeon
22dc408217 Avoid using getDataLayout, deprecated in LLVM 3.7 2015-10-13 15:11:59 +09:00
Cristi Cobzarenco
4b308b44e1 typos: fix a grabbag of typos all over the place 2015-10-08 19:49:31 +01:00
Alex Crichton
958d563825 trans: Clean up handling the LLVM data layout
Turns out for OSX our data layout was subtly wrong and the LLVM update must have
exposed this. Instead of fixing this I've removed all data layouts from the
compiler to just use the defaults that LLVM provides for all targets. All data
layouts (and a number of dead modules) are removed from the compiler here.
Custom target specifications can still provide a custom data layout, but it is
now an optional key as the default will be used if one isn't specified.
2015-07-16 20:25:52 -07:00
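
Under this scheme the module's layout simply comes from the TargetMachine; a sketch assuming a reasonably modern LLVM where createDataLayout() is available:

    #include "llvm/IR/Module.h"
    #include "llvm/Target/TargetMachine.h"

    // Apply the target's default data layout to a module instead of
    // hard-coding a layout string per target in the compiler.
    static void applyDefaultDataLayout(llvm::Module &M, const llvm::TargetMachine &TM) {
      M.setDataLayout(TM.createDataLayout());
    }
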
Alex Crichton
f9d4149c29 rustc: Update LLVM
This commit updates the LLVM submodule in use to the current HEAD of the LLVM
repository. This is primarily being done to start picking up unwinding support
for MSVC, which is currently unimplemented in the revision of LLVM we are using.
Along the way a few changes had to be made:

* As usual, lots of C++ debuginfo bindings in LLVM changed, so there were some
  significant changes to our RustWrapper.cpp
* As usual, some pass management changed in LLVM, so clang was re-scrutinized to
  ensure that we're doing the same thing as clang.
* Some optimization options are now passed directly into the
  `PassManagerBuilder` instead of through CLI switches to LLVM.
* The `NoFramePointerElim` option was removed from LLVM in favor of the
  `no-frame-pointer-elim` function attribute.

Additionally, LLVM has picked up some new optimizations which required fixing an
existing soundness hole in the IR we generate. It appears that the current LLVM
we use does not expose this hole. When an enum is moved, the previous slot in
memory is overwritten with a bit pattern corresponding to "dropped". When the
drop glue for this slot is run, however, the switch on the discriminant can
often start executing the `unreachable` block of the switch due to the
discriminant now being outside the normal range. This was patched over locally
for now by having the `unreachable` block just change to a `ret void`.
2015-06-16 22:56:42 -07:00
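
For the NoFramePointerElim bullet above, the attribute-based replacement looks roughly like this sketch (a hedged example using the string attribute clang also emits, not the exact rustc code):

    #include "llvm/IR/Function.h"

    // Ask LLVM to keep the frame pointer for this function via a string
    // attribute, replacing the removed NoFramePointerElim target option.
    static void keepFramePointer(llvm::Function &F) {
      F.addFnAttr("no-frame-pointer-elim", "true");
    }
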
Tamir Duberstein
ba276adab5 LLVM < 3.5 is unsupported since bb18a3c 2015-04-21 07:20:48 -07:00
Julian Orth
660b48fae5 Add support for target-cpu=native 2015-03-10 01:56:51 +01:00
Daniel Micay
4deb4bcba5 optimize position independent code in executables
Position independent code has fewer requirements in executables, so pass
the appropriate flag to LLVM in order to allow more optimization. At the
moment this means faster thread-local storage.
2014-10-12 09:18:14 -04:00
Luqman Aden
4b22178d32 Update LLVM. 2014-10-04 13:28:57 -04:00
Luqman Aden
90eeb92e10 Update to LLVM head and mark various ptrs as nonnull. 2014-05-22 21:06:24 -04:00
Alex Crichton
a7bee7b05d Add a crate for missing stubs from libcore
The core library in theory has 0 dependencies, but in practice it has some in
order for it to be efficient. These dependencies are in the form of the basic
memory operations provided by libc traditionally, such as memset, memcmp, etc.
These functions are trivial to implement and themselves have 0 dependencies.

This commit adds a new crate, librlibc, which will serve the purpose of
providing these dependencies. The crate is never linked to by default, but is
available to be linked to by downstream consumers. Normally these functions are
provided by the system libc, but in other freestanding contexts a libc may not
be available. In these cases, librlibc will suffice for enabling execution with
libcore.

cc #10116
2014-05-15 13:50:37 -07:00
Alex Crichton
58ab4a0064 rustc: Enable -f{function,data}-sections
The compiler has previously produced binaries on the order of 1.8MB for a
hello world program, "fn main() {}". This is largely a result of the compilation
model, which compiles an entire library into a single object file, and of the
fact that static linking is favored by default.

When linking, linkers will pull in the entire contents of an object file if any
symbol from the object file is used. This means that if any symbol from a rust
library is used, the entire library is pulled in unconditionally, regardless of
how much of the library is actually needed.

Traditional C/C++ projects do not normally encounter these large executable
problems because their archives (rust's rlibs) are composed of many objects.
Because of this, linkers can eliminate entire objects from being in the final
executable. With rustc, however, the linker does not have the opportunity to
leave out entire object files.

In order to get similar benefits from dead code stripping at link time, this
commit enables the -ffunction-sections and -fdata-sections flags in LLVM, as
well as passing --gc-sections to the linker *by default*. This means that each
function and each global will be placed into its own section, allowing the
linker to GC all unused functions and data symbols.

By enabling these flags, rust is able to generate much smaller binaries by default.
On linux, a hello world binary went from 1.8MB to 597K (a 67% reduction in
size). The output size of dynamic libraries remained constant, but the output
size of rlibs increased, as seen below:

    libarena         -  2.27% bigger (   292872 =>    299508)
    libcollections   -  0.64% bigger (  6765884 =>   6809076)
    libflate         -  0.83% bigger (   186516 =>    188060)
    libfourcc        - 14.71% bigger (   307290 =>    352498)
    libgetopts       -  4.42% bigger (   761468 =>    795102)
    libglob          -  2.73% bigger (   899932 =>    924542)
    libgreen         -  9.63% bigger (  1281718 =>   1405124)
    libhexfloat      - 13.88% bigger (   333738 =>    380060)
    liblibc          - 10.79% bigger (   551280 =>    610736)
    liblog           - 10.93% bigger (   218208 =>    242060)
    libnative        -  8.26% bigger (  1362096 =>   1474658)
    libnum           -  2.34% bigger (  2583400 =>   2643916)
    librand          -  1.72% bigger (  1608684 =>   1636394)
    libregex         -  6.50% bigger (  1747768 =>   1861398)
    librustc         -  4.21% bigger (151820192 => 158218924)
    librustdoc       -  8.96% bigger ( 13142604 =>  14320544)
    librustuv        -  4.13% bigger (  4366896 =>   4547304)
    libsemver        -  2.66% bigger (   396166 =>    406686)
    libserialize     -  1.91% bigger (  6878396 =>   7009822)
    libstd           -  3.59% bigger ( 39485286 =>  40902218)
    libsync          -  3.95% bigger (  1386390 =>   1441204)
    libsyntax        -  4.96% bigger ( 35757202 =>  37530798)
    libterm          - 13.99% bigger (   924580 =>   1053902)
    libtest          -  6.04% bigger (  2455720 =>   2604092)
    libtime          -  2.84% bigger (  1075708 =>   1106242)
    liburl           -  6.53% bigger (   590458 =>    629004)
    libuuid          -  4.63% bigger (   326350 =>    341466)
    libworkcache     -  8.45% bigger (  1230702 =>   1334750)

This increase in size is a result of encoding many more section names into each
object file (rlib). These increases are moderate enough that this change seems
worthwhile to me, due to the drastic improvements seen in the final artifacts.
The overall increase of the stage2 target folder (not the size of an install)
went from 337MB to 348MB (3% increase).

Additionally, linking is generally slower when executed with all these new
sections plus the --gc-sections flag. The stage0 compiler takes 1.4s to link the
`rustc` binary, whereas the stage1 compiler takes 1.9s to link the binary. Three
megabytes are shaved off the binary. I found this increase in link time to be
acceptable relative to the benefits of code size gained.

This commit only enables --gc-sections for *executables*, not dynamic libraries.
LLVM does all the heavy lifting when producing an object file for a dynamic
library, so there is little else for the linker to do (remember that we only
have one object file).

I conducted similar experiments by putting a *module's* functions and data
symbols into its own section (granularity moved to a module level instead of a
function/static level). The size benefits of a hello world were seen to be on
the order of 400K rather than 1.2MB. It seemed that enough benefit was gained
using -ffunction-sections that this route was less desirable, despite the lesser
increases in binary rlib size.
2014-04-29 10:29:00 -07:00
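
On the LLVM side, -ffunction-sections and -fdata-sections correspond to two TargetOptions flags; a hedged sketch (the --gc-sections half is a plain linker flag and is not shown):

    #include "llvm/Target/TargetOptions.h"

    // Build TargetOptions that place every function and every global in its
    // own section, so the linker's --gc-sections can drop unused ones.
    static llvm::TargetOptions sectionedOptions() {
      llvm::TargetOptions Options;
      Options.FunctionSections = true;  // -ffunction-sections
      Options.DataSections = true;      // -fdata-sections
      return Options;
    }
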
Alex Crichton
de7845ac72 rustc: Fix passing errors from LLVM to rustc
Many of the instances of setting a global error variable ended up leaving a
dangling pointer into free'd memory. This changes the method of error
transmission to strdup any error and "relinquish ownership" to rustc when it
gets an error. The corresponding Rust code will then free the error as
necessary.

Closes #12865
2014-04-23 10:04:29 -07:00
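
The ownership hand-off described above follows a common strdup-and-relinquish pattern; a sketch with illustrative names (not the actual rustllvm entry points):

    #include <cstdlib>
    #include <cstring>

    // Last error reported by the C++ glue; owned here until the caller takes it.
    static char *LastError = nullptr;

    // Record an error by copying it, so we never hand out a pointer into
    // memory that LLVM may free behind our back.
    static void setLastError(const char *Msg) {
      std::free(LastError);
      LastError = strdup(Msg);
    }

    // Relinquish ownership of the error string; the Rust caller frees it.
    extern "C" char *exampleTakeLastError() {
      char *Ret = LastError;
      LastError = nullptr;
      return Ret;
    }
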
Alex Crichton
30ff17f809 Upgrade LLVM
This comes with a number of fixes to be compatible with upstream LLVM:

* Previously all monomorphizations of "mem::size_of()" would receive the same
  symbol. In the past LLVM would silently rename duplicated symbols, but it
  appears to now drop the duplicate symbols and functions. The symbol
  names of monomorphized functions are no longer based solely on the type of
  the function, but rather the type and the unique hash for the
  monomorphization.

* Split stacks are no longer a global feature controlled by a flag in LLVM.
  Instead, they are opt-in on a per-function basis through a function attribute.
  The rust #[no_split_stack] attribute will disable this; otherwise all
  functions have #[split_stack] attached to them.

* The compare and swap instruction now takes two atomic orderings, one for the
  successful case and one for the failure case. LLVM internally has an
  implementation of calculating the appropriate failure ordering given a
  particular success ordering (previously only a success ordering was
  specified), and I copied that into the intrinsic translation so the failure
  ordering isn't supplied on a source level for now.

* Minor tweaks to LLVM's API in terms of debuginfo, naming, c++11 conventions,
  etc.
2014-04-17 11:11:39 -07:00
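
For the split-stack bullet, opting in per function is just a string attribute; a minimal sketch (the helper name is illustrative):

    #include "llvm/IR/Function.h"

    // Opt a single function into segmented ("split") stacks now that the
    // old global LLVM flag is gone.
    static void enableSplitStack(llvm::Function &F) {
      F.addFnAttr("split-stack");
    }
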
Alex Crichton
1ac5e84e91 rustc: Get rustc compiling with LLVM 3.{3,4} again
The travis builds have been breaking recently because LLVM 3.5 upstream is
changing. This looks like it's likely to continue, so it would be more useful
for us if we could lock ourselves to a system LLVM version that is not changing.

This commit has the support to bring our C++ glue to LLVM back in line with what
was possible back in LLVM 3.{3,4}. I don't think we're going to be able to
reasonably protect against regressions in the future, but this kind of code is a
good sign that we can continue to use the system LLVM for simple-ish things.
Codegen for ARM won't work and it won't have some of the perf improvements we
have, but using the system LLVM should work well enough for development.
2014-02-26 15:01:15 -08:00
Alex Crichton
294b27d806 Update LLVM
Upstream LLVM has changed slightly such that our PassWrapper.cpp no longer
compiles (travis errors). This updates the bundled LLVM to the latest nightly
which will hopefully fix the travis errors we're seeing.
2014-02-25 09:37:30 -08:00
bors
e3b1f3c443 auto merge of #11853 : alexcrichton/rust/up-llvm, r=brson
This upgrade brings a commit by @eddyb to help optimize virtual calls in
a few places (https://github.com/llvm-mirror/llvm/commit/6d2bd95) as well as a
commit by @c-a to *greatly* improve the runtime of the optimization passes
(https://github.com/rust-lang/llvm/pull/3).

Nice work to these guys!
2014-01-29 23:46:26 -08:00
Alex Crichton
8cd935f52a Upgrade LLVM
This upgrade brings a commit by @eddyb to help optimize virtual calls in
a few places (https://github.com/llvm-mirror/llvm/commit/6d2bd95) as well as a
commit by @c-a to *greatly* improve the runtime of the optimization passes
(https://github.com/rust-lang/llvm/pull/3).

Nice work to these guys!
2014-01-29 23:43:39 -08:00
Daniel Micay
cb263e875e enable fp-elim when debug info is disabled
This can almost be fully disabled, as it no longer breaks retrieving a
backtrace on OS X as verified by @alexcrichton. However, it still
breaks retrieving the values of parameters. This should be fixable in
the future via a proper location list...

Closes #7477
2014-01-29 16:35:05 -05:00
Luqman Aden
d9eaeda21c Update llvm. 2013-12-29 23:38:43 -05:00
Alex Crichton
667d114f47 Disable all unwinding on -Z no-landing-pads LTO
When performing LTO, the rust compiler has an opportunity to completely strip
all landing pads in all dependent libraries. I've modified the LTO pass to
recognize the -Z no-landing-pads option when also running an LTO pass to flag
everything in LLVM as nothrow. I've verified that this prevents any and all
invoke instructions from being emitted.

I believe that this is one of our best options for moving forward with
accommodating use cases where unwinding doesn't really make sense. This will
allow libraries to be built with landing pads by default but allow usage of them
in contexts where landing pads aren't necessary.

cc #10780
2013-12-11 09:18:20 -08:00
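
Flagging "everything in LLVM as nothrow" amounts to adding the nounwind attribute across the merged module; a hedged sketch of that idea:

    #include "llvm/IR/Attributes.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/Module.h"

    // With landing pads disabled under LTO, mark every function as unable
    // to unwind so LLVM can emit plain calls instead of invokes.
    static void markAllNoUnwind(llvm::Module &M) {
      for (llvm::Function &F : M)
        F.addFnAttr(llvm::Attribute::NoUnwind);
    }
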
Alex Crichton
fce4a174b9 Implement LTO
This commit implements LTO for rust leveraging LLVM's passes. What this means
is:

* When compiling an rlib, in addition to inserting foo.o into the archive, also
  insert foo.bc (the LLVM bytecode) of the optimized module.

* When the compiler detects the -Z lto option, it will attempt to perform LTO on
  a staticlib or binary output. The compiler will emit an error if a dylib or
  rlib output is being generated.

* The actual act of performing LTO is as follows:

    1. Force all upstream libraries to have an rlib version available.
    2. Load the bytecode of each upstream library from the rlib.
    3. Link all this bytecode into the current LLVM module (just using llvm
       apis)
    4. Run an internalization pass which internalizes all symbols except those
       found reachable for the local crate of compilation.
    5. Run the LLVM LTO pass manager over this entire module

    6a. If assembling an archive, then add all upstream rlibs into the output
        archive. This ignores all of the object/bitcode/metadata files rust
        generated and placed inside the rlibs.
    6b. If linking a binary, create copies of all upstream rlibs, remove the
        rust-generated object-file, and then link everything as usual.

As I have explained in #10741, this process is excruciatingly slow, so this is
*not* turned on by default, and it is also why I have decided to hide it behind
a -Z flag for now. The good news is that the binary sizes are about as small as
they can be as a result of LTO, so it's definitely working.

Closes #10741
Closes #10740
2013-12-09 14:41:49 -08:00
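
Steps 4 and 5 map fairly directly onto LLVM's legacy pass APIs; a sketch assuming a newer LLVM where populateLTOPassManager takes only the pass manager (the preservation rule shown is illustrative):

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/IPO.h"
    #include "llvm/Transforms/IPO/PassManagerBuilder.h"

    // Internalize everything the local crate doesn't need to export, then
    // run LLVM's LTO pipeline over the merged module.
    static void runLTOPasses(llvm::Module &M) {
      llvm::legacy::PassManager PM;
      PM.add(llvm::createInternalizePass([](const llvm::GlobalValue &GV) {
        return GV.getName() == "main";  // illustrative preservation rule
      }));

      llvm::PassManagerBuilder PMB;
      PMB.OptLevel = 3;
      PMB.populateLTOPassManager(PM);

      PM.run(M);
    }
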
Alex Crichton
6b34ba242d Update LLVM and jettison jit support
LLVM's JIT has been updated numerous times, and we haven't been tracking it at
all. The existing LLVM glue code no longer compiles, and the JIT isn't used for
anything currently.

This also rebases out the FixedStackSegment support which we have added to LLVM.
None of this is still in use by the compiler, and there's no need to keep this
functionality around inside of LLVM.

This is needed to unblock #10708 (where we're tripping an LLVM assertion).
2013-12-05 09:15:54 -08:00
Jyun-Yan You
350b5438cd add -Z soft-float option
This change adds a -Z soft-float option for generating
software floating-point library calls.
It also implies using the soft-float ABI, the same as with llc.

It is useful for targets that have no FPU.
2013-10-01 11:19:18 +08:00
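
On the LLVM side this roughly corresponds to selecting the soft-float ABI in TargetOptions; a hedged sketch (helper name illustrative):

    #include "llvm/Target/TargetOptions.h"

    // Request the soft-float ABI, the calling-convention half of -Z soft-float,
    // for targets without an FPU.
    static llvm::TargetOptions softFloatOptions() {
      llvm::TargetOptions Options;
      Options.FloatABIType = llvm::FloatABI::Soft;
      return Options;
    }
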
Alex Crichton
8d12673c82 Tweak pass management and add some more options
The only change to the default passes is that O1 now doesn't run the inline
pass, just always-inline with lifetime intrinsics. O2 also now has a threshold
of 225 instead of 275. Otherwise the default passes being run is the same.

I've also added a few more options for configuring the pass pipeline. Namely you
can now specify arguments to LLVM directly via the `--llvm-args` command line
option which operates similarly to `--passes`. I also added the ability to turn
off pre-population of the pass manager in case you want to run *only* your own
passes.
2013-08-30 17:56:04 -07:00
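
A sketch of how those inliner choices are typically expressed with PassManagerBuilder, assuming an era-appropriate LLVM where createAlwaysInlinerPass is the always-inliner's name (not the exact rustc code):

    #include "llvm/Transforms/IPO.h"
    #include "llvm/Transforms/IPO/PassManagerBuilder.h"

    // At O1 (and below) use only the always-inliner; O2 and above use the
    // full inliner with a threshold of 225.
    static void configureInliner(llvm::PassManagerBuilder &PMB, unsigned OptLevel) {
      if (OptLevel <= 1)
        PMB.Inliner = llvm::createAlwaysInlinerPass();
      else
        PMB.Inliner = llvm::createFunctionInliningPass(225);
    }
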
Brian Anderson
3801534d10 Revert "auto merge of #8695 : thestinger/rust/build, r=pcwalton"
This reverts commit 2c0f9bd354, reversing
changes made to f8c4f0ea9c.

Conflicts:
	src/rustllvm/RustWrapper.cpp
2013-08-28 19:59:52 -07:00
Alex Crichton
73540551e5 Rewrite pass management with LLVM
Beforehand, it was unclear whether rust was performing the "recommended set" of
optimizations provided by LLVM for code. This commit changes the way we run
passes to closely mirror that of clang, which in theory does it correctly. The
notable changes include:

* Passes are no longer explicitly added one by one. This would be difficult to
  keep up with as LLVM changes, and we aren't guaranteed to always know the best
  order in which to run passes
* Passes are now managed by LLVM's PassManagerBuilder object. This is then used
  to populate the various pass managers that are run.
* We now run both a FunctionPassManager and a module-wide PassManager. This is
  what clang does, and I presume that we *may* see a speed boost from the
  module-wide passes just having to do less work. I have not measured this.
* The codegen pass manager has been extracted to its own separate pass manager
  to not get mixed up with the other passes
* All pass managers now include passes for target-specific data layout and
  analysis passes

Some new features include:

* You can now print all passes being run with `-Z print-llvm-passes`
* When specifying passes via `--passes`, the passes are now appended to the
  default list of passes instead of overwriting them.
* The output of `--passes list` is now generated by LLVM instead of maintaining
  a list of passes ourselves
* Loop vectorization is turned on by default as an optimization pass and can be
  disabled with `-Z no-vectorize-loops`
2013-08-26 20:11:51 -07:00
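
A condensed sketch of the clang-style setup described above, written against the later legacy:: spelling of the pass managers (names and option values are illustrative):

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/IPO/PassManagerBuilder.h"

    // Let PassManagerBuilder pick the "recommended set" of passes, the way
    // clang does, instead of adding passes one by one.
    static void optimize(llvm::Module &M) {
      llvm::PassManagerBuilder PMB;
      PMB.OptLevel = 2;
      PMB.LoopVectorize = true;  // on by default; -Z no-vectorize-loops turns it off

      llvm::legacy::FunctionPassManager FPM(&M);
      llvm::legacy::PassManager MPM;
      PMB.populateFunctionPassManager(FPM);
      PMB.populateModulePassManager(MPM);

      FPM.doInitialization();
      for (llvm::Function &F : M)
        FPM.run(F);
      FPM.doFinalization();
      MPM.run(M);
    }
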
Brian Anderson
b7a6919899 rustc: Dispose of LLVM passes in test cases 2013-06-19 15:18:25 -07:00
James Miller
faf1afee16 Further refactor optimization pass handling
This refactors pass handling to use the argument names, so it can be used
in a similar manner to `opt`. This may be slightly less efficient than the
previous version, but it is much easier to maintain.

It also adds in the ability to specify a custom pipeline on the command
line, this overrides the normal passes, however. This should completely
close #2396.
2013-05-29 20:08:20 +12:00
James Miller
d694e283b3 Refactor optimization pass handling.
Refactor the optimization passes to explicitly use the passes. This commit
just re-implements the same passes as were already being run.

It also adds an option (behind `-Z`) to run the LLVM lint pass on the
unoptimized IR.
2013-05-29 14:16:49 +12:00