Previously, on Windows, a directory junction would return false from `is_dir`,
causing various odd behavior; specifically, calls to `create_dir_all` might fail
when they should otherwise succeed.
Closes #26716
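For illustration, here is a simplified sketch (not the standard library's actual implementation) of why `create_dir_all` cares about `is_dir`: an existing path is only accepted when it reports being a directory, so a junction that wrongly reports `false` ends up triggering a failing `create_dir` call.
```rust
use std::fs;
use std::io;
use std::path::Path;

// Illustrative only: roughly the shape of a recursive create_dir_all.
fn create_dir_all_sketch(path: &Path) -> io::Result<()> {
    if path.is_dir() {
        // An existing directory is fine. If a junction wrongly reports
        // `is_dir() == false`, we fall through and try to create it again,
        // which fails with "already exists".
        return Ok(());
    }
    if let Some(parent) = path.parent() {
        if !parent.as_os_str().is_empty() {
            create_dir_all_sketch(parent)?;
        }
    }
    fs::create_dir(path)
}
```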
This makes `Debug` for `File` show the file's path and access mode on OS X, just like on Linux.
I'd be happy about any feedback on how to make this code better. In particular, I'm not sure how to handle the buffer passed to `fcntl`. This way works, but it feels a bit cumbersome; `fcntl` unfortunately doesn't return the length of the path.
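For context, a minimal sketch of the `fcntl(F_GETPATH)` pattern in question (assuming the `libc` crate; this is not the PR's actual code): the caller has to hand over a `PATH_MAX`-sized buffer, because the call only fills it with a NUL-terminated path and never reports its length.
```rust
use std::ffi::CStr;
use std::os::unix::io::RawFd;
use std::path::PathBuf;

// Illustrative only: fetch the path backing a file descriptor on OS X.
fn path_for_fd(fd: RawFd) -> Option<PathBuf> {
    // F_GETPATH requires a buffer of at least PATH_MAX bytes.
    let mut buf = vec![0u8; libc::PATH_MAX as usize];
    let ret = unsafe { libc::fcntl(fd, libc::F_GETPATH, buf.as_mut_ptr()) };
    if ret == -1 {
        return None;
    }
    // The buffer now holds a NUL-terminated path; we have to find its end
    // ourselves because fcntl doesn't tell us the length.
    let cstr = unsafe { CStr::from_ptr(buf.as_ptr() as *const libc::c_char) };
    Some(PathBuf::from(cstr.to_string_lossy().into_owned()))
}
```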
We have previously always relied upon an external tool, `ar`, to modify archives
that the compiler produces (staticlibs, rlibs, etc). This approach, however, has
a number of downsides:
* Spawning a process is relatively expensive for small compilations
* Encoding arguments across process boundaries often incurs unnecessary overhead
or lossiness. For example `ar` has a tough time dealing with files that have
the same name in archives, and the compiler copies many files around to ensure
they can be passed to `ar` in a reasonable fashion.
* Most `ar` programs found do **not** have the ability to target arbitrary
platforms, so this is an extra tool which needs to be found/specified when
cross compiling.
The LLVM project has had a tool called `llvm-ar` for quite some time now, but it
wasn't available in the standard LLVM libraries (it was just a standalone
program). Recently, however, in LLVM 3.7, this functionality has been moved to a
library and is now accessible by consumers of LLVM via the `writeArchive`
function.
This commit migrates our archive bindings to no longer invoke `ar` by default
but instead make a library call to LLVM to do various operations. This solves
all of the downsides listed above:
* Archive management is now much faster, for example creating a "hello world"
staticlib is now 6x faster (50ms => 8ms). Linking dynamic libraries also
recently started requiring modification of rlibs, and linking a hello world
dynamic library is now 2x faster.
* The compiler is now one step closer to "hassle free" cross compilation because
no external tool is needed for managing archives, LLVM does the right thing!
This commit does not currently remove support for calling a system `ar` utility.
We will continue to maintain compatibility with LLVM 3.5 and 3.6 looking forward
(so the system LLVM can be used wherever possible), and in these cases we must
shell out to a system utility. All nightly builds of Rust, however, will stop
needing a system `ar`.
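As a rough structural sketch of the two paths described above (not rustc's actual code): prefer an in-process LLVM archive writer when available, and only shell out to a system `ar` when built against an older LLVM. The `llvm_write_archive` function below is a hypothetical stand-in for the binding around LLVM's `writeArchive`.
```rust
use std::io;
use std::path::Path;
use std::process::Command;

// Hypothetical stand-in for the FFI wrapper around LLVM's writeArchive.
fn llvm_write_archive(_out: &Path, _members: &[&Path]) -> io::Result<()> {
    unimplemented!("in-process LLVM archive writer")
}

fn build_archive(out: &Path, members: &[&Path], llvm_writer_available: bool) -> io::Result<()> {
    if llvm_writer_available {
        // LLVM 3.7+: no external tool needed.
        llvm_write_archive(out, members)
    } else {
        // LLVM 3.5/3.6: fall back to a system `ar`.
        let status = Command::new("ar").arg("crs").arg(out).args(members).status()?;
        if status.success() {
            Ok(())
        } else {
            Err(io::Error::new(io::ErrorKind::Other, "ar returned an error"))
        }
    }
}
```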
This improves diagnostic messages when a `\u` escape is used incorrectly and the `{` is
missing. Instead of saying “unknown character escape: u”, it now reports
that the unicode escape sequence is incomplete and suggests the correct syntax.
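For illustration, a small example of the kind of code this targets (the exact diagnostic wording is whatever the PR emits, not reproduced here):
```rust
fn main() {
    // let ch = '\u0041';   // incomplete unicode escape: the `{` is missing
    let ch = '\u{0041}';    // correct syntax: \u{...}
    assert_eq!(ch, 'A');
}
```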
This allows `CString` and `CStr` to be used with the `Cow` type,
which is extremely useful when interfacing with C libraries
that make extensive use of C-style strings.
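A minimal usage sketch of what this enables (the function and names are illustrative, not from the PR): a single `Cow<CStr>` return type can hand back either a borrowed `&CStr` or an owned `CString`.
```rust
use std::borrow::Cow;
use std::ffi::{CStr, CString};

// A NUL-terminated constant, as a C library might expect.
fn default_name() -> &'static CStr {
    CStr::from_bytes_with_nul(b"default\0").unwrap()
}

// Borrow the static string when possible, allocate only when needed.
fn name_for(id: u32) -> Cow<'static, CStr> {
    if id == 0 {
        Cow::Borrowed(default_name())
    } else {
        Cow::Owned(CString::new(format!("object-{}", id)).unwrap())
    }
}
```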
There are a number of problems with MSVC landing pads today:
* They only work about 80% of the time with optimizations enabled. For example, when running the run-pass test suite, a failing test will cause `compiletest` to segfault (because of a thread panic). There are also a large number of run-fail tests which will simply crash.
* Enabling landing pads caused the regression seen in #26915.
Overall it looks like LLVM's support for MSVC landing pads isn't as robust as we'd like for now, so let's take a little more time before we turn them on by default.
Closes #26915
This allows CString and CStr to be used with the Cow type,
which is extremely useful when interfacing with C libraries
that make extensive use of C-style strings.
I found that this isn't supported in the current API, and I think it is necessary.
This is my first PR to Rust (I'm not a Rust expert and I'm not sure whether this is the best way to propose these things); of course, any suggested changes are welcome.
I'm almost sure that these file types aren't supported on Windows, so I added the functions to the win::fs API returning a fixed `false`; I hope this is correct.
As a follow-up to PR #26849, improve one more I/O location where
we can use `Vec::resize` to ensure better performance when zeroing
buffers.
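For illustration (not the PR's actual diff), the pattern in question is growing a buffer with a single `resize` call that fills the new tail with zeros:
```rust
// Illustrative only: extend a read buffer with zeroed bytes in one call.
fn grow_buffer(buf: &mut Vec<u8>, new_len: usize) {
    // `resize` fills the added tail with the given element; for `0u8`
    // this optimizes well compared to pushing bytes one at a time.
    buf.resize(new_len, 0);
}

fn main() {
    let mut buf = Vec::new();
    grow_buffer(&mut buf, 4096);
    assert!(buf.iter().all(|&b| b == 0));
}
```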
Use the `vec![elt; n]` macro everywhere we can in the tree. It replaces
`repeat(elt).take(n).collect()`, which is more verbose, requires type
hints, and right now produces worse code. `vec![]` is preferable for vector
initialization.
The `vec![]` replacement also touches one I/O path, `Stdin::read` on
Windows, and that should be a small improvement.
r? @alexcrichton
The common pattern `iter::repeat(elt).take(n).collect::<Vec<_>>()` is
exactly equivalent to `vec![elt; n]`; do this replacement in the whole
tree.
(Actually, `vec![]` is smart enough to call `clone` only n - 1 times, while
the former solution calls `clone` n times, though this fact is
virtually irrelevant in practice.)
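A small before/after illustration of the replacement (hypothetical snippet, not taken from the tree):
```rust
use std::iter;

fn main() {
    // Before: verbose and needs a type hint.
    let before: Vec<u8> = iter::repeat(0u8).take(1024).collect();

    // After: the equivalent vec! form.
    let after = vec![0u8; 1024];

    assert_eq!(before, after);
}
```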
Exploiting the fact that the length of the slices is known up front, we
can use a counted loop instead of iterators, which means that we only
need a single counter, instead of having to increment and check one
pointer for each iterator.
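A minimal sketch of the idea (not the library's exact code): once the lengths are known to match, one counter drives the comparison of both slices.
```rust
fn slices_eq<T: PartialEq>(a: &[T], b: &[T]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // A single counted loop: no per-iterator pointer increments and checks.
    for i in 0..a.len() {
        if a[i] != b[i] {
            return false;
        }
    }
    true
}
```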
Benchmarks comparing vectors with 100,000 elements:
Before:
```
running 8 tests
test eq1_u8 ... bench: 66,757 ns/iter (+/- 113)
test eq2_u16 ... bench: 111,267 ns/iter (+/- 149)
test eq3_u32 ... bench: 126,282 ns/iter (+/- 111)
test eq4_u64 ... bench: 126,418 ns/iter (+/- 155)
test ne1_u8 ... bench: 88,990 ns/iter (+/- 161)
test ne2_u16 ... bench: 89,126 ns/iter (+/- 265)
test ne3_u32 ... bench: 96,901 ns/iter (+/- 92)
test ne4_u64 ... bench: 96,750 ns/iter (+/- 137)
```
After:
```
running 8 tests
test eq1_u8 ... bench: 46,413 ns/iter (+/- 521)
test eq2_u16 ... bench: 46,500 ns/iter (+/- 74)
test eq3_u32 ... bench: 50,059 ns/iter (+/- 92)
test eq4_u64 ... bench: 54,001 ns/iter (+/- 92)
test ne1_u8 ... bench: 47,595 ns/iter (+/- 53)
test ne2_u16 ... bench: 47,521 ns/iter (+/- 59)
test ne3_u32 ... bench: 44,889 ns/iter (+/- 74)
test ne4_u64 ... bench: 47,775 ns/iter (+/- 68)
```