# PTX support, take 2

- You can generate PTX using `--emit=asm` and the right (custom) target, and then run the result on a NVIDIA GPU.

- You can compile `core` to PTX. [Xargo] also works, and it can compile some other crates like `collections` (but I doubt all of those make sense on a GPU).

[Xargo]: https://github.com/japaric/xargo

- You can create "global" functions, which can be "called" by the host, using the `"ptx-kernel"` ABI, e.g. `extern "ptx-kernel" fn kernel() { .. }`. Every other function is a "device" function and can only be called by the GPU.

- Intrinsics like `__syncthreads()` and `blockIdx.x` are available as platform intrinsics. These intrinsics are *not* in the `core` crate, but any Rust user can create "bindings" to them using an `extern "platform-intrinsic"` block. See the example at the end.

- Trying to emit PTX with `-g` (debuginfo) produces an LLVM error. But I don't think PTX can contain debuginfo anyway, so `-g` should be ignored and a warning printed ("`-g` doesn't work with this target", or something like that).

- "Single source" support. You *can't* write a single source file that contains both host and device code. I think it should be possible to implement that outside the compiler using compiler plugins / build scripts (see the build-script sketch at the end of this description).

- The equivalent of CUDA's `__shared__`, which is used to declare memory that's shared between the threads of the same block. This could be implemented using attributes, e.g. `#[shared] static mut SCRATCH_MEMORY: [f32; 64]`, but it hasn't been implemented yet.

- Built-in targets. This PR doesn't add targets to the compiler just yet, but one can create custom targets to be able to emit PTX code (see the example at the end). The idea is to let people experiment with this feature before committing to it (built-in targets are "insta-stable").

- All functions must be "inlined". IOW, the `.rlib` must always contain the LLVM bitcode of all the functions of the crate it was produced from. Otherwise, you end up with "undefined references" in the final PTX code, but you won't get *any* linker error because no linker is involved; instead, you'll hit a runtime error when loading the PTX into the GPU. The workaround is to use `#[inline]` on non-generic functions and to never use `#[inline(never)]`, but this may not always be possible because e.g. you could be relying on third-party code. See the sketch right after this list.

- Should `--emit=asm` generate a `.ptx` file instead of a `.s` file?
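To make the inlining requirement concrete, here's a minimal sketch (the function names are made up for illustration):

``` rust
#![no_std]

// `#[inline]` keeps the LLVM bitcode of this non-generic device function in
// the .rlib, so it can be inlined into the final PTX. Without it (or with
// `#[inline(never)]`) the PTX would contain an undefined reference that only
// surfaces as a runtime error when the module is loaded onto the GPU.
#[inline]
pub fn scale(x: f32, factor: f32) -> f32 {
    x * factor
}

// Generic functions are monomorphized in the crate that calls them, so their
// bitcode travels along without any attribute.
pub fn pick<T>(cond: bool, a: T, b: T) -> T {
    if cond { a } else { b }
}
```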
TL;DR: Use Xargo to turn a crate into a PTX module (a `.s` file). Then pass that PTX module, as a string, to the GPU and run it.

The full code is in [this repository]. This section gives an overview of how to run Rust code on a NVIDIA GPU.

[this repository]: https://github.com/japaric/cuda

- Create a custom target. Here's the 64-bit NVPTX target (NOTE: the comments are not valid because this is supposed to be a JSON file; remove them before you use this file):

``` js
// nvptx64-nvidia-cuda.json
{
  "arch": "nvptx64",  // matches LLVM
  "cpu": "sm_20",  // "oldest" compute capability supported by LLVM
  "data-layout": "e-i64:64-v16:16-v32:32-n16:32:64",
  "llvm-target": "nvptx64-nvidia-cuda",
  "max-atomic-width": 0,  // LLVM errors with any other value :-(
  "os": "cuda",  // matches LLVM
  "panic-strategy": "abort",
  "target-endian": "little",
  "target-pointer-width": "64",
  "target-vendor": "nvidia"  // matches LLVM -- not required
}
```

(There's a 32-bit target specification in the linked repository.)

- Write a kernel

``` rust
extern "platform-intrinsic" {
    fn nvptx_block_dim_x() -> i32;
    fn nvptx_block_idx_x() -> i32;
    fn nvptx_thread_idx_x() -> i32;
}

/// Copies an array of `n` floating point numbers from `src` to `dst`
#[no_mangle]
pub unsafe extern "ptx-kernel" fn memcpy(dst: *mut f32,
                                         src: *const f32,
                                         n: usize) {
    let i = (nvptx_block_dim_x() as isize)
        .wrapping_mul(nvptx_block_idx_x() as isize)
        .wrapping_add(nvptx_thread_idx_x() as isize);

    if (i as usize) < n {
        *dst.offset(i) = *src.offset(i);
    }
}
```

- Emit PTX code

```
$ xargo rustc --target nvptx64-nvidia-cuda --release -- --emit=asm
   Compiling core v0.0.0 (file://..)
(..)
   Compiling nvptx-builtins v0.1.0 (https://github.com/japaric/nvptx-builtins)
   Compiling kernel v0.1.0

$ cat target/nvptx64-nvidia-cuda/release/deps/kernel-*.s
//
// Generated by LLVM NVPTX Back-End
//

.version 3.2
.target sm_20
.address_size 64

        // .globl       memcpy
.visible .entry memcpy(
        .param .u64 memcpy_param_0,
        .param .u64 memcpy_param_1,
        .param .u64 memcpy_param_2
)
{
        .reg .pred      %p<2>;
        .reg .s32       %r<5>;
        .reg .s64       %rd<12>;

        ld.param.u64    %rd7, [memcpy_param_2];
        mov.u32         %r1, %ntid.x;
        mov.u32         %r2, %ctaid.x;
        mul.wide.s32    %rd8, %r2, %r1;
        mov.u32         %r3, %tid.x;
        cvt.s64.s32     %rd9, %r3;
        add.s64         %rd10, %rd9, %rd8;
        setp.ge.u64     %p1, %rd10, %rd7;
        @%p1 bra        LBB0_2;

        ld.param.u64    %rd3, [memcpy_param_0];
        ld.param.u64    %rd4, [memcpy_param_1];
        cvta.to.global.u64      %rd5, %rd4;
        cvta.to.global.u64      %rd6, %rd3;
        shl.b64         %rd11, %rd10, 2;
        add.s64         %rd1, %rd6, %rd11;
        add.s64         %rd2, %rd5, %rd11;
        ld.global.u32   %r4, [%rd2];
        st.global.u32   [%rd1], %r4;

LBB0_2:
        ret;
}
```

- Run it on the GPU

``` rust
// `kernel.ptx` is the `*.s` file we got in the previous step
const KERNEL: &'static str = include_str!("kernel.ptx");

driver::initialize()?;

let device = Device(0)?;
let ctx = device.create_context()?;
let module = ctx.load_module(KERNEL)?;
let kernel = module.function("memcpy")?;

let h_a: Vec<f32> = /* create some random data */;
let h_b = vec![0.; N];

let d_a = driver::allocate(bytes)?;
let d_b = driver::allocate(bytes)?;

// Copy from host to GPU
driver::copy(h_a, d_a)?;

// Run `memcpy` on the GPU
kernel.launch(d_b, d_a, N)?;

// Copy from GPU to host
driver::copy(d_b, h_b)?;

// Verify
assert_eq!(h_a, h_b);

// `d_a`, `d_b`, `h_a`, `h_b` are dropped/freed here
```
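As an aside on the `if (i as usize) < n` guard in the kernel: the launch configuration usually rounds the block count up, so some threads fall past the end of the buffer. A minimal host-side sketch of that computation (the launch API itself lives in the linked repository; `N`, `bytes`, and `THREADS_PER_BLOCK` here are illustrative names that mirror the snippet above):

``` rust
use std::mem;

const N: usize = 1024 * 1024;         // number of f32 elements to copy
const THREADS_PER_BLOCK: usize = 256; // a typical block size

fn main() {
    // Round up so that n_blocks * THREADS_PER_BLOCK >= N. The kernel's
    // `if (i as usize) < n` guard discards the excess threads.
    let n_blocks = (N + THREADS_PER_BLOCK - 1) / THREADS_PER_BLOCK;

    // Size of each device buffer, as passed to the allocator.
    let bytes = N * mem::size_of::<f32>();

    assert!(n_blocks * THREADS_PER_BLOCK >= N);
    println!("{} blocks x {} threads, {} bytes per buffer",
             n_blocks, THREADS_PER_BLOCK, bytes);
}
```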
---

cc @alexcrichton @brson @rkruppe

> What has changed since #34195?

- `core` can now be compiled into PTX, which makes it very easy to turn `no_std` crates into "kernels" with the help of Xargo.

- There's now a way, the `"ptx-kernel"` ABI, to generate "global" functions. The old PR required a manual step (it was a hack) to "convert" "device" functions into "global" functions. (Only "global" functions can be launched by the host.)

- Everything is unstable. There are no "insta-stable" built-in targets this time (\*); users have to use a custom target to experiment with this feature. Also, PTX intrinsics like `__syncthreads` and `blockIdx.x` are now implemented as platform intrinsics, so they no longer live in the `core` crate.

(\*) I'd actually like to have in-tree targets because that makes this target more discoverable, removes the need to lug around `.json` files, etc. However, bundling a target with the compiler immediately puts it on the path towards stabilization, which gives us just two cycles to find and fix any problem with the target specification. Afterwards, it becomes hard to tweak the specification because that could be a breaking change.

A possible solution could be "unstable built-in targets". Basically, to use an unstable target, you'd also have to pass `-Z unstable-options` to the compiler. And unstable targets, being unstable, wouldn't be available on stable.

> Why should this be merged?

- To let people experiment with the feature out of tree. Having easy access to the feature (in every nightly) allows this. I also think that, as it is, it should be possible to start prototyping type-safe single source support using build scripts, macros and/or plugins (see the sketch below).

- It's a straightforward implementation. No different than adding support for any other architecture.
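A rough sketch of that build-script approach to single source, assuming the host crate keeps the kernel crate in a `kernel/` subdirectory (the crate layout and names here are hypothetical, not part of this PR):

``` rust
// build.rs of the host crate: compile the `kernel` crate to PTX with Xargo
// before the host crate itself is built.
use std::process::Command;

fn main() {
    // Re-run this script when the kernel sources change.
    println!("cargo:rerun-if-changed=kernel/src/lib.rs");

    // `xargo rustc` forwards to `cargo rustc`, so this mirrors the manual
    // invocation shown in the walkthrough above.
    let status = Command::new("xargo")
        .args(&["rustc",
                "--manifest-path", "kernel/Cargo.toml",
                "--target", "nvptx64-nvidia-cuda",
                "--release",
                "--", "--emit=asm"])
        .status()
        .expect("failed to run xargo");
    assert!(status.success());

    // The PTX module ends up under
    // kernel/target/nvptx64-nvidia-cuda/release/deps/kernel-*.s; the host
    // code can copy it to a known location and `include_str!` it.
}
```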
# The Rust Programming Language
This is the main source code repository for Rust. It contains the compiler, standard library, and documentation.
## Quick Start
Read "Installing Rust" from The Book.
## Building from Source
1. Make sure you have installed the dependencies:

   - `g++` 4.7 or later or `clang++` 3.x
   - `python` 2.7 (but not 3.x)
   - GNU `make` 3.81 or later
   - `cmake` 3.4.3 or later
   - `curl`
   - `git`
2. Clone the source with `git`:

   ```sh
   $ git clone https://github.com/rust-lang/rust.git
   $ cd rust
   ```
3. Build and install:

   ```sh
   $ ./configure
   $ make && sudo make install
   ```
   > Note: Install locations can be adjusted by passing a `--prefix` argument to `configure`. Various other options are also supported – pass `--help` for more information on them.

When complete, `sudo make install` will place several programs into `/usr/local/bin`: `rustc`, the Rust compiler, and `rustdoc`, the API-documentation tool. This install does not include Cargo, Rust's package manager, which you may also want to build.
### Building on Windows
There are two prominent ABIs in use on Windows: the native (MSVC) ABI used by Visual Studio, and the GNU ABI used by the GCC toolchain. Which version of Rust you need depends largely on what C/C++ libraries you want to interoperate with: for interop with software produced by Visual Studio use the MSVC build of Rust; for interop with GNU software built using the MinGW/MSYS2 toolchain use the GNU build.
#### MinGW
MSYS2 can be used to easily build Rust on Windows:
1. Grab the latest MSYS2 installer and go through the installer.

2. Run `mingw32_shell.bat` or `mingw64_shell.bat` from wherever you installed MSYS2 (i.e. `C:\msys64`), depending on whether you want 32-bit or 64-bit Rust. (As of the latest version of MSYS2 you have to run `msys2_shell.cmd -mingw32` or `msys2_shell.cmd -mingw64` from the command line instead.)
From this terminal, install the required tools:
# Update package mirrors (may be needed if you have a fresh install of MSYS2) $ pacman -Sy pacman-mirrors # Install build tools needed for Rust. If you're building a 32-bit compiler, # then replace "x86_64" below with "i686". If you've already got git, python, # or CMake installed and in PATH you can remove them from this list. Note # that it is important that you do **not** use the 'python2' and 'cmake' # packages from the 'msys2' subsystem. The build has historically been known # to fail with these packages. $ pacman -S git \ make \ diffutils \ tar \ mingw-w64-x86_64-python2 \ mingw-w64-x86_64-cmake \ mingw-w64-x86_64-gcc
4. Navigate to Rust's source code (or clone it), then configure and build it:

   ```sh
   $ ./configure
   $ make && make install
   ```
#### MSVC
MSVC builds of Rust additionally require an installation of Visual Studio 2013 (or later) so `rustc` can use its linker. Make sure to check the "C++ tools" option.
With these dependencies installed, you can build the compiler in a `cmd.exe` shell with:

```
> python x.py build
```
If you're running inside of an msys shell, however, you can run:

```sh
$ ./configure --build=x86_64-pc-windows-msvc
$ make && make install
```
Currently, building Rust only works with some known versions of Visual Studio. If you have a more recent version installed that the build system doesn't understand, you may need to force rustbuild to use an older version. This can be done by manually calling the appropriate vcvars file before running the bootstrap:

```
CALL "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvars64.bat"
python x.py build
```
### Building Documentation
If you’d like to build the documentation, it’s almost the same:

```sh
$ ./configure
$ make docs
```
The generated documentation will appear in a top-level `doc` directory, created by the `make` rule.
### Notes
Since the Rust compiler is written in Rust, it must be built by a precompiled "snapshot" version of itself (made in an earlier state of development). As such, source builds require a connection to the Internet, to fetch snapshots, and an OS that can execute the available snapshot binaries.
Snapshot binaries are currently built and tested on several platforms:
| Platform / Architecture        | x86 | x86_64 |
|--------------------------------|-----|--------|
| Windows (7, 8, Server 2008 R2) | ✓   | ✓      |
| Linux (2.6.18 or later)        | ✓   | ✓      |
| OSX (10.7 Lion or later)       | ✓   | ✓      |
You may find that other platforms work, but these are our officially supported build environments that are most likely to work.
Rust currently needs between 600MiB and 1.5GiB to build, depending on platform. If it hits swap, it will take a very long time to build.
There is more advice about hacking on Rust in CONTRIBUTING.md.
## Getting Help
The Rust community congregates in a few places:
- Stack Overflow - Direct questions about using the language.
- users.rust-lang.org - General discussion and broader questions.
- /r/rust - News and general discussion.
## Contributing
To contribute to Rust, please see CONTRIBUTING.
Rust has an IRC culture and most real-time collaboration happens in a variety of channels on Mozilla's IRC network, irc.mozilla.org. The most popular channel is #rust, a venue for general discussion about Rust. And a good place to ask for help would be #rust-beginners.
## License
Rust is primarily distributed under the terms of both the MIT license and the Apache License (Version 2.0), with portions covered by various BSD-like licenses.
See LICENSE-APACHE, LICENSE-MIT, and COPYRIGHT for details.