Rollup merge of #58454 - pitdicker:windows_stdio, r=alexcrichton

Refactor Windows stdio and remove stdin double buffering

I was looking for something nice and small to work on, tried to tackle a few FIXMEs in Windows stdio, and things grew from there.

This part of the standard library contains some tricky code, and has changed over the years to handle more corner cases. It could use some refactoring and extra comments.

Changes/fixes:
- Made `StderrRaw` `pub(crate)` to remove the `Write` implementations on `sys::Stderr` (used unsynchronised for panic output).
- Removed the unused `Read` implementation on `sys::windows::stdin`.
- Removed the `windows::stdio::Output` enum. It made sense when we cached the handles, but now that we get the handle on every read/write we can use a simple function like `is_console` instead.
- `write` now calculates the number of bytes written in terms of UTF-8 when not all `u16`s could be written.
- If `write` could only write one half of a surrogate pair, it attempts another write for the other half, because user code can't reslice the data in any way that would allow us to write it otherwise.
- Removed the double buffering on stdin. The documentation on the unexposed `StdinRaw` says 'This handle is not synchronized or buffered in any fashion', which is now true.
- `sys::windows::Stdin` now only ever partially fills its buffer, so we can guarantee any arbitrary UTF-16 can be re-encoded as UTF-8 without losing data.
- `sys::windows::STDIN_BUF_SIZE` is slightly larger to compensate; there should be no real change in the number of syscalls the buffered `Stdin` makes. This buffer is a little larger, while the extra buffer on stdin is gone (see the arithmetic sketch after this list).
- `sys::windows::Stdin` now attempts to handle unpaired surrogates at its buffer boundary.
- `sys::windows::Stdin` no longer allocates for its buffer, but the UTF-16 decoding still does.
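
For a sense of the arithmetic behind the buffer-related points above, here is a small sketch. The constant values mirror the ones introduced in `sys::windows::stdio` in the diff below; the intermediate name `MAX_UTF16_UNITS` and the `main` are only for illustration:

```rust
// One ReadConsoleW/WriteConsoleW call is capped at MAX_BUFFER_SIZE bytes (see #13304),
// which corresponds to MAX_BUFFER_SIZE / 2 UTF-16 code units.
const MAX_BUFFER_SIZE: usize = 8192;
const MAX_UTF16_UNITS: usize = MAX_BUFFER_SIZE / 2; // 4096 `u16`s per console read

// Worst case: every `u16` decodes to a 3-byte UTF-8 sequence (a BMP char above U+07FF).
// A surrogate pair is 2 `u16`s -> 4 UTF-8 bytes, so it never exceeds 3 bytes per `u16`.
const STDIN_BUF_SIZE: usize = MAX_UTF16_UNITS * 3; // 12288 bytes (was 8 * 1024)

fn main() {
    // The buffered `Stdin` can therefore always hold one full console read re-encoded as UTF-8.
    assert_eq!(STDIN_BUF_SIZE, 12 * 1024);
    println!("{} u16s per console read fit in a {}-byte UTF-8 buffer",
             MAX_UTF16_UNITS, STDIN_BUF_SIZE);
}
```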

### Testing
I did some manual testing of reading and writing to the console. The console does support UTF-16 in some sense, but doesn't support displaying characters outside the BMP.
- compile stage 1 stdlib with a tiny value for `MAX_BUFFER_SIZE` to make it easier to catch corner cases
- run a simple test program that reads from stdin and echoes to stdout (see the sketch after this list)
- write some lines with plenty of ASCII and emoji in a text editor
- copy and paste in console to stdin
- return with `\r\n` or CTRL-Z
- copy and paste in text editor
- check it round-trips
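
For reference, a minimal sketch of the kind of echo program used in this manual test (not part of the PR); it just feeds every line from stdin back to stdout, exercising the new console read and write paths:

```rust
use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let stdout = io::stdout();
    let mut out = stdout.lock();
    // Each line read from the console goes through sys::windows::Stdin (UTF-16 -> UTF-8),
    // and writing it back goes through sys::windows::Stdout (UTF-8 -> UTF-16).
    for line in stdin.lock().lines() {
        writeln!(out, "{}", line?)?;
    }
    Ok(())
}
```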

-----

Fixes https://github.com/rust-lang/rust/issues/23344. All but one of the suggestions in that issue are now implemented. The missing one is:

> * When reading data, we require the entire set of input to be valid UTF-16. We should instead attempt to read as much of the input as possible as valid UTF-16, only returning an error for the actual invalid elements. For example if we read 10 elements, 5 of which are valid UTF-16, the 6th is bad, and then the remaining are all valid UTF-16, we should probably return the first 5 on a call to `read`, then return an error, then return the remaining on the next call to `read`.

Stdin in console mode deals with text directly input by a user. In my opinion an unpaired surrogate is quite unlikely in that case, and a valid reason to error on the entire line of input (which is probably short). Dealing with it is incompatible with an unbuffered stdin, which seems the more interesting guarantee to me.
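
For illustration only (not part of the PR): this is how an unpaired surrogate surfaces when decoding UTF-16 with `char::decode_utf16`, which is essentially the condition the new `utf16_to_utf8` helper in the diff below turns into an `InvalidData` error:

```rust
use std::char::decode_utf16;

fn main() {
    // 'a', a lone high surrogate (0xD800), then 'b'.
    let input = [0x0061u16, 0xD800, 0x0062];
    for unit in decode_utf16(input.iter().cloned()) {
        match unit {
            Ok(c) => println!("char: {:?}", c),
            Err(e) => println!("unpaired surrogate: {:#x}", e.unpaired_surrogate()),
        }
    }
}
```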
7 changed files with 275 additions and 243 deletions

View File

@@ -9,8 +9,10 @@ impl Stdin {
pub fn new() -> io::Result<Stdin> {
Ok(Stdin(()))
}
}
pub fn read(&self, _: &mut [u8]) -> io::Result<usize> {
impl io::Read for Stdin {
fn read(&mut self, _buf: &mut [u8]) -> io::Result<usize> {
Ok(0)
}
}
@@ -19,15 +21,17 @@ impl Stdout {
pub fn new() -> io::Result<Stdout> {
Ok(Stdout(()))
}
}
pub fn write(&self, _: &[u8]) -> io::Result<usize> {
impl io::Write for Stdout {
fn write(&mut self, _buf: &[u8]) -> io::Result<usize> {
Err(io::Error::new(
io::ErrorKind::BrokenPipe,
"Stdout is not connected to any output in this environment",
))
}
pub fn flush(&self) -> io::Result<()> {
fn flush(&mut self) -> io::Result<()> {
Ok(())
}
}
@@ -36,29 +40,18 @@ impl Stderr {
pub fn new() -> io::Result<Stderr> {
Ok(Stderr(()))
}
}
pub fn write(&self, _: &[u8]) -> io::Result<usize> {
impl io::Write for Stderr {
fn write(&mut self, _buf: &[u8]) -> io::Result<usize> {
Err(io::Error::new(
io::ErrorKind::BrokenPipe,
"Stderr is not connected to any output in this environment",
))
}
pub fn flush(&self) -> io::Result<()> {
Ok(())
}
}
// FIXME: right now this raw stderr handle is used in a few places because
// std::io::stderr_raw isn't exposed, but once that's exposed this impl
// should go away
impl io::Write for Stderr {
fn write(&mut self, data: &[u8]) -> io::Result<usize> {
Stderr::write(self, data)
}
fn flush(&mut self) -> io::Result<()> {
Stderr::flush(self)
Ok(())
}
}

View File

@@ -8,10 +8,12 @@
impl Stdin {
pub fn new() -> io::Result<Stdin> { Ok(Stdin(())) }
}
pub fn read(&self, data: &mut [u8]) -> io::Result<usize> {
impl io::Read for Stdin {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
let fd = FileDesc::new(0);
let ret = fd.read(data);
let ret = fd.read(buf);
fd.into_raw();
ret
}
@@ -19,44 +21,35 @@ pub fn read(&self, data: &mut [u8]) -> io::Result<usize> {
impl Stdout {
pub fn new() -> io::Result<Stdout> { Ok(Stdout(())) }
}
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
impl io::Write for Stdout {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
let fd = FileDesc::new(1);
let ret = fd.write(data);
let ret = fd.write(buf);
fd.into_raw();
ret
}
pub fn flush(&self) -> io::Result<()> {
fn flush(&mut self) -> io::Result<()> {
cvt(syscall::fsync(1)).and(Ok(()))
}
}
impl Stderr {
pub fn new() -> io::Result<Stderr> { Ok(Stderr(())) }
}
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
impl io::Write for Stderr {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
let fd = FileDesc::new(2);
let ret = fd.write(data);
let ret = fd.write(buf);
fd.into_raw();
ret
}
pub fn flush(&self) -> io::Result<()> {
cvt(syscall::fsync(2)).and(Ok(()))
}
}
// FIXME: right now this raw stderr handle is used in a few places because
// std::io::stderr_raw isn't exposed, but once that's exposed this impl
// should go away
impl io::Write for Stderr {
fn write(&mut self, data: &[u8]) -> io::Result<usize> {
Stderr::write(self, data)
}
fn flush(&mut self) -> io::Result<()> {
Stderr::flush(self)
cvt(syscall::fsync(2)).and(Ok(()))
}
}

View File

@@ -16,46 +16,39 @@ fn with_std_fd<F: FnOnce(&FileDesc) -> R, R>(fd: abi::Fd, f: F) -> R {
impl Stdin {
pub fn new() -> io::Result<Stdin> { Ok(Stdin(())) }
}
pub fn read(&self, data: &mut [u8]) -> io::Result<usize> {
with_std_fd(abi::FD_STDIN, |fd| fd.read(data))
impl io::Read for Stdin {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
with_std_fd(abi::FD_STDIN, |fd| fd.read(buf))
}
}
impl Stdout {
pub fn new() -> io::Result<Stdout> { Ok(Stdout(())) }
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
with_std_fd(abi::FD_STDOUT, |fd| fd.write(data))
}
pub fn flush(&self) -> io::Result<()> {
impl io::Write for Stdout {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
with_std_fd(abi::FD_STDOUT, |fd| fd.write(buf))
}
fn flush(&mut self) -> io::Result<()> {
with_std_fd(abi::FD_STDOUT, |fd| fd.flush())
}
}
impl Stderr {
pub fn new() -> io::Result<Stderr> { Ok(Stderr(())) }
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
with_std_fd(abi::FD_STDERR, |fd| fd.write(data))
}
pub fn flush(&self) -> io::Result<()> {
with_std_fd(abi::FD_STDERR, |fd| fd.flush())
}
}
// FIXME: right now this raw stderr handle is used in a few places because
// std::io::stderr_raw isn't exposed, but once that's exposed this impl
// should go away
impl io::Write for Stderr {
fn write(&mut self, data: &[u8]) -> io::Result<usize> {
Stderr::write(self, data)
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
with_std_fd(abi::FD_STDERR, |fd| fd.write(buf))
}
fn flush(&mut self) -> io::Result<()> {
Stderr::flush(self)
with_std_fd(abi::FD_STDERR, |fd| fd.flush())
}
}

View File

@@ -8,10 +8,12 @@
impl Stdin {
pub fn new() -> io::Result<Stdin> { Ok(Stdin(())) }
}
pub fn read(&self, data: &mut [u8]) -> io::Result<usize> {
impl io::Read for Stdin {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
let fd = FileDesc::new(libc::STDIN_FILENO);
let ret = fd.read(data);
let ret = fd.read(buf);
fd.into_raw(); // do not close this FD
ret
}
@@ -19,44 +21,35 @@ pub fn read(&self, data: &mut [u8]) -> io::Result<usize> {
impl Stdout {
pub fn new() -> io::Result<Stdout> { Ok(Stdout(())) }
}
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
impl io::Write for Stdout {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
let fd = FileDesc::new(libc::STDOUT_FILENO);
let ret = fd.write(data);
let ret = fd.write(buf);
fd.into_raw(); // do not close this FD
ret
}
pub fn flush(&self) -> io::Result<()> {
fn flush(&mut self) -> io::Result<()> {
Ok(())
}
}
impl Stderr {
pub fn new() -> io::Result<Stderr> { Ok(Stderr(())) }
}
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
impl io::Write for Stderr {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
let fd = FileDesc::new(libc::STDERR_FILENO);
let ret = fd.write(data);
let ret = fd.write(buf);
fd.into_raw(); // do not close this FD
ret
}
pub fn flush(&self) -> io::Result<()> {
Ok(())
}
}
// FIXME: right now this raw stderr handle is used in a few places because
// std::io::stderr_raw isn't exposed, but once that's exposed this impl
// should go away
impl io::Write for Stderr {
fn write(&mut self, data: &[u8]) -> io::Result<usize> {
Stderr::write(self, data)
}
fn flush(&mut self) -> io::Result<()> {
Stderr::flush(self)
Ok(())
}
}

View File

@@ -9,9 +9,11 @@ impl Stdin {
pub fn new() -> io::Result<Stdin> {
Ok(Stdin)
}
}
pub fn read(&self, data: &mut [u8]) -> io::Result<usize> {
Ok(ReadSysCall::perform(0, data))
impl io::Read for Stdin {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
Ok(ReadSysCall::perform(0, buf))
}
}
@@ -19,13 +21,15 @@ impl Stdout {
pub fn new() -> io::Result<Stdout> {
Ok(Stdout)
}
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
WriteSysCall::perform(1, data);
Ok(data.len())
}
pub fn flush(&self) -> io::Result<()> {
impl io::Write for Stdout {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
WriteSysCall::perform(1, buf);
Ok(buf.len())
}
fn flush(&mut self) -> io::Result<()> {
Ok(())
}
}
@@ -34,23 +38,16 @@ impl Stderr {
pub fn new() -> io::Result<Stderr> {
Ok(Stderr)
}
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
WriteSysCall::perform(2, data);
Ok(data.len())
}
pub fn flush(&self) -> io::Result<()> {
Ok(())
}
}
impl io::Write for Stderr {
fn write(&mut self, data: &[u8]) -> io::Result<usize> {
(&*self).write(data)
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
WriteSysCall::perform(2, buf);
Ok(buf.len())
}
fn flush(&mut self) -> io::Result<()> {
(&*self).flush()
Ok(())
}
}

View File

@@ -252,9 +252,9 @@ fn to_handle(&self, stdio_id: c::DWORD, pipe: &mut Option<AnonPipe>)
// should still be unavailable so propagate the
// INVALID_HANDLE_VALUE.
Stdio::Inherit => {
match stdio::get(stdio_id) {
match stdio::get_handle(stdio_id) {
Ok(io) => {
let io = Handle::new(io.handle());
let io = Handle::new(io);
let ret = io.duplicate(0, true,
c::DUPLICATE_SAME_ACCESS);
io.into_raw();

View File

@@ -1,152 +1,259 @@
#![unstable(issue = "0", feature = "windows_stdio")]
use io::prelude::*;
use char::decode_utf16;
use cmp;
use io::{self, Cursor};
use io;
use ptr;
use str;
use sync::Mutex;
use sys::c;
use sys::cvt;
use sys::handle::Handle;
pub enum Output {
Console(c::HANDLE),
Pipe(c::HANDLE),
}
// Don't cache handles but get them fresh for every read/write. This allows us to track changes to
// the value over time (such as if a process calls `SetStdHandle` while it's running). See #40490.
pub struct Stdin {
utf8: Mutex<io::Cursor<Vec<u8>>>,
surrogate: u16,
}
pub struct Stdout;
pub struct Stderr;
pub fn get(handle: c::DWORD) -> io::Result<Output> {
let handle = unsafe { c::GetStdHandle(handle) };
// Apparently Windows doesn't handle large reads on stdin or writes to stdout/stderr well (see
// #13304 for details).
//
// From MSDN (2011): "The storage for this buffer is allocated from a shared heap for the
// process that is 64 KB in size. The maximum size of the buffer will depend on heap usage."
//
// We choose the cap at 8 KiB because libuv does the same, and it seems to be acceptable so far.
const MAX_BUFFER_SIZE: usize = 8192;
// The standard buffer size of BufReader for Stdin should be able to hold 3x more bytes than there
// are `u16`'s in MAX_BUFFER_SIZE. This ensures the read data can always be completely decoded from
// UTF-16 to UTF-8.
pub const STDIN_BUF_SIZE: usize = MAX_BUFFER_SIZE / 2 * 3;
pub fn get_handle(handle_id: c::DWORD) -> io::Result<c::HANDLE> {
let handle = unsafe { c::GetStdHandle(handle_id) };
if handle == c::INVALID_HANDLE_VALUE {
Err(io::Error::last_os_error())
} else if handle.is_null() {
Err(io::Error::from_raw_os_error(c::ERROR_INVALID_HANDLE as i32))
} else {
let mut out = 0;
match unsafe { c::GetConsoleMode(handle, &mut out) } {
0 => Ok(Output::Pipe(handle)),
_ => Ok(Output::Console(handle)),
}
Ok(handle)
}
}
fn write(handle: c::DWORD, data: &[u8]) -> io::Result<usize> {
let handle = match get(handle)? {
Output::Console(c) => c,
Output::Pipe(p) => {
let handle = Handle::new(p);
fn is_console(handle: c::HANDLE) -> bool {
// `GetConsoleMode` will return false (0) if this is a pipe (we don't care about the reported
// mode). This will only detect Windows Console, not other terminals connected to a pipe like
// MSYS. Which is exactly what we need, as only Windows Console needs a conversion to UTF-16.
let mut mode = 0;
unsafe { c::GetConsoleMode(handle, &mut mode) != 0 }
}
fn write(handle_id: c::DWORD, data: &[u8]) -> io::Result<usize> {
let handle = get_handle(handle_id)?;
if !is_console(handle) {
let handle = Handle::new(handle);
let ret = handle.write(data);
handle.into_raw();
return ret
handle.into_raw(); // Don't close the handle
return ret;
}
};
// As with stdin on windows, stdout often can't handle writes of large
// sizes. For an example, see #14940. For this reason, don't try to
// write the entire output buffer on windows.
// As the console is meant for presenting text, we assume bytes of `data` come from a string
// and are encoded as UTF-8, which needs to be encoded as UTF-16.
//
// For some other references, it appears that this problem has been
// encountered by others [1] [2]. We choose the number 8K just because
// libuv does the same.
//
// [1]: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1232
// [2]: http://www.mail-archive.com/log4net-dev@logging.apache.org/msg00661.html
const OUT_MAX: usize = 8192;
let len = cmp::min(data.len(), OUT_MAX);
// If the data is not valid UTF-8 we write out as many bytes as are valid.
// Only when there are no valid bytes (which will happen on the next call), return an error.
let len = cmp::min(data.len(), MAX_BUFFER_SIZE / 2);
let utf8 = match str::from_utf8(&data[..len]) {
Ok(s) => s,
Err(ref e) if e.valid_up_to() == 0 => return Err(invalid_encoding()),
Err(ref e) if e.valid_up_to() == 0 => {
return Err(io::Error::new(io::ErrorKind::InvalidData,
"Windows stdio in console mode does not support writing non-UTF-8 byte sequences"))
},
Err(e) => str::from_utf8(&data[..e.valid_up_to()]).unwrap(),
};
let utf16 = utf8.encode_utf16().collect::<Vec<u16>>();
let mut utf16 = [0u16; MAX_BUFFER_SIZE / 2];
let mut len_utf16 = 0;
for (chr, dest) in utf8.encode_utf16().zip(utf16.iter_mut()) {
*dest = chr;
len_utf16 += 1;
}
let utf16 = &utf16[..len_utf16];
let mut written = write_u16s(handle, &utf16)?;
// Figure out how many bytes of UTF-8 were written away as UTF-16.
if written == utf16.len() {
Ok(utf8.len())
} else {
// Make sure we didn't end up writing only half of a surrogate pair (even though the chance
// is tiny). Because it is not possible for user code to re-slice `data` in such a way that
// a missing surrogate can be produced (and also because of the UTF-8 validation above),
// write the missing surrogate out now.
// Buffering it would mean we have to lie about the number of bytes written.
let first_char_remaining = utf16[written];
if first_char_remaining >= 0xDC00 && first_char_remaining <= 0xDFFF { // low surrogate
// We just hope this works, and give up otherwise
let _ = write_u16s(handle, &utf16[written..written+1]);
written += 1;
}
// Calculate the number of bytes of `utf8` that were actually written.
let mut count = 0;
for ch in utf16[..written].iter() {
count += match ch {
0x0000 ..= 0x007F => 1,
0x0080 ..= 0x07FF => 2,
0xDC00 ..= 0xDFFF => 1, // Low surrogate. We already counted 3 bytes for the other.
_ => 3,
};
}
debug_assert!(String::from_utf16(&utf16[..written]).unwrap() == utf8[..count]);
Ok(count)
}
}
fn write_u16s(handle: c::HANDLE, data: &[u16]) -> io::Result<usize> {
let mut written = 0;
cvt(unsafe {
c::WriteConsoleW(handle,
utf16.as_ptr() as c::LPCVOID,
utf16.len() as u32,
data.as_ptr() as c::LPCVOID,
data.len() as u32,
&mut written,
ptr::null_mut())
})?;
// FIXME if this only partially writes the utf16 buffer then we need to
// figure out how many bytes of `data` were actually written
assert_eq!(written as usize, utf16.len());
Ok(utf8.len())
Ok(written as usize)
}
impl Stdin {
pub fn new() -> io::Result<Stdin> {
Ok(Stdin {
utf8: Mutex::new(Cursor::new(Vec::new())),
})
Ok(Stdin { surrogate: 0 })
}
}
pub fn read(&self, buf: &mut [u8]) -> io::Result<usize> {
let handle = match get(c::STD_INPUT_HANDLE)? {
Output::Console(c) => c,
Output::Pipe(p) => {
let handle = Handle::new(p);
impl io::Read for Stdin {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
let handle = get_handle(c::STD_INPUT_HANDLE)?;
if !is_console(handle) {
let handle = Handle::new(handle);
let ret = handle.read(buf);
handle.into_raw();
return ret
handle.into_raw(); // Don't close the handle
return ret;
}
if buf.len() == 0 {
return Ok(0);
} else if buf.len() < 4 {
return Err(io::Error::new(io::ErrorKind::InvalidInput,
"Windows stdin in console mode does not support a buffer too small to \
guarantee holding one arbitrary UTF-8 character (4 bytes)"))
}
let mut utf16_buf = [0u16; MAX_BUFFER_SIZE / 2];
// In the worst case, an UTF-8 string can take 3 bytes for every `u16` of an UTF-16. So
// we can read at most a third of `buf.len()` chars and uphold the guarantee no data gets
// lost.
let amount = cmp::min(buf.len() / 3, utf16_buf.len());
let read = read_u16s_fixup_surrogates(handle, &mut utf16_buf, amount, &mut self.surrogate)?;
utf16_to_utf8(&utf16_buf[..read], buf)
}
}
// We assume that if the last `u16` is an unpaired surrogate they got sliced apart by our
// buffer size, and keep it around for the next read hoping to put them together.
// This is a best effort, and may not work if we are not the only reader on Stdin.
fn read_u16s_fixup_surrogates(handle: c::HANDLE,
buf: &mut [u16],
mut amount: usize,
surrogate: &mut u16) -> io::Result<usize>
{
// Insert possibly remaining unpaired surrogate from last read.
let mut start = 0;
if *surrogate != 0 {
buf[0] = *surrogate;
*surrogate = 0;
start = 1;
if amount == 1 {
// Special case: `Stdin::read` guarantees we can always read at least one new `u16`
// and combine it with an unpaired surrogate, because the UTF-8 buffer is at least
// 4 bytes.
amount = 2;
}
}
let mut amount = read_u16s(handle, &mut buf[start..amount])? + start;
if amount > 0 {
let last_char = buf[amount - 1];
if last_char >= 0xD800 && last_char <= 0xDBFF { // high surrogate
*surrogate = last_char;
amount -= 1;
}
}
Ok(amount)
}
fn read_u16s(handle: c::HANDLE, buf: &mut [u16]) -> io::Result<usize> {
// Configure the `pInputControl` parameter to not only return on `\r\n` but also Ctrl-Z, the
// traditional DOS method to indicate end of character stream / user input (SUB).
// See #38274 and https://stackoverflow.com/questions/43836040/win-api-readconsole.
const CTRL_Z: u16 = 0x1A;
const CTRL_Z_MASK: c::ULONG = 1 << CTRL_Z;
let mut input_control = c::CONSOLE_READCONSOLE_CONTROL {
nLength: ::mem::size_of::<c::CONSOLE_READCONSOLE_CONTROL>() as c::ULONG,
nInitialChars: 0,
dwCtrlWakeupMask: CTRL_Z_MASK,
dwControlKeyState: 0,
};
let mut utf8 = self.utf8.lock().unwrap();
// Read more if the buffer is empty
if utf8.position() as usize == utf8.get_ref().len() {
let mut utf16 = vec![0u16; 0x1000];
let mut num = 0;
let mut input_control = readconsole_input_control(CTRL_Z_MASK);
let mut amount = 0;
cvt(unsafe {
c::ReadConsoleW(handle,
utf16.as_mut_ptr() as c::LPVOID,
utf16.len() as u32,
&mut num,
buf.as_mut_ptr() as c::LPVOID,
buf.len() as u32,
&mut amount,
&mut input_control as c::PCONSOLE_READCONSOLE_CONTROL)
})?;
utf16.truncate(num as usize);
// FIXME: what to do about this data that has already been read?
let mut data = match String::from_utf16(&utf16) {
Ok(utf8) => utf8.into_bytes(),
Err(..) => return Err(invalid_encoding()),
};
if let Some(&last_byte) = data.last() {
if last_byte == CTRL_Z {
data.pop();
if amount > 0 && buf[amount as usize - 1] == CTRL_Z {
amount -= 1;
}
}
*utf8 = Cursor::new(data);
Ok(amount as usize)
}
// MemReader shouldn't error here since we just filled it
utf8.read(buf)
#[allow(unused)]
fn utf16_to_utf8(utf16: &[u16], utf8: &mut [u8]) -> io::Result<usize> {
let mut written = 0;
for chr in decode_utf16(utf16.iter().cloned()) {
match chr {
Ok(chr) => {
chr.encode_utf8(&mut utf8[written..]);
written += chr.len_utf8();
}
Err(_) => {
// We can't really do any better than forget all data and return an error.
return Err(io::Error::new(io::ErrorKind::InvalidData,
"Windows stdin in console mode does not support non-UTF-16 input; \
encountered unpaired surrogate"))
}
}
#[unstable(reason = "not public", issue = "0", feature = "fd_read")]
impl<'a> Read for &'a Stdin {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
(**self).read(buf)
}
Ok(written)
}
impl Stdout {
pub fn new() -> io::Result<Stdout> {
Ok(Stdout)
}
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
write(c::STD_OUTPUT_HANDLE, data)
}
pub fn flush(&self) -> io::Result<()> {
impl io::Write for Stdout {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
write(c::STD_OUTPUT_HANDLE, buf)
}
fn flush(&mut self) -> io::Result<()> {
Ok(())
}
}
@@ -155,66 +262,22 @@ impl Stderr {
pub fn new() -> io::Result<Stderr> {
Ok(Stderr)
}
pub fn write(&self, data: &[u8]) -> io::Result<usize> {
write(c::STD_ERROR_HANDLE, data)
}
pub fn flush(&self) -> io::Result<()> {
Ok(())
}
}
// FIXME: right now this raw stderr handle is used in a few places because
// std::io::stderr_raw isn't exposed, but once that's exposed this impl
// should go away
impl io::Write for Stderr {
fn write(&mut self, data: &[u8]) -> io::Result<usize> {
Stderr::write(self, data)
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
write(c::STD_ERROR_HANDLE, buf)
}
fn flush(&mut self) -> io::Result<()> {
Stderr::flush(self)
Ok(())
}
}
impl Output {
pub fn handle(&self) -> c::HANDLE {
match *self {
Output::Console(c) => c,
Output::Pipe(c) => c,
}
}
}
fn invalid_encoding() -> io::Error {
io::Error::new(io::ErrorKind::InvalidData,
"Windows stdio in console mode does not support non-UTF-8 byte sequences; \
see https://github.com/rust-lang/rust/issues/23344")
}
fn readconsole_input_control(wakeup_mask: c::ULONG) -> c::CONSOLE_READCONSOLE_CONTROL {
c::CONSOLE_READCONSOLE_CONTROL {
nLength: ::mem::size_of::<c::CONSOLE_READCONSOLE_CONTROL>() as c::ULONG,
nInitialChars: 0,
dwCtrlWakeupMask: wakeup_mask,
dwControlKeyState: 0,
}
}
const CTRL_Z: u8 = 0x1A;
const CTRL_Z_MASK: c::ULONG = 0x4000000; //1 << 0x1A
pub fn is_ebadf(err: &io::Error) -> bool {
err.raw_os_error() == Some(c::ERROR_INVALID_HANDLE as i32)
}
// The default buffer capacity is 64k, but apparently windows
// doesn't like 64k reads on stdin. See #13304 for details, but the
// idea is that on windows we use a slightly smaller buffer that's
// been seen to be acceptable.
pub const STDIN_BUF_SIZE: usize = 8 * 1024;
pub fn panic_output() -> Option<impl io::Write> {
Stderr::new().ok()
}