rust/src/test/run-pass/issue-11881.rs

// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
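
// Regression test for issue #11881: a type that derives `Encodable` should
// be usable with either the JSON encoder or the RBML encoder through a
// generic `Encodable` bound.
//
// The derived impls leave their encoder and error type parameters uncovered,
// which the newer orphan rules (cc #19470) reject, so the old rules are
// temporarily re-enabled here with `old_orphan_check`.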
#![feature(old_orphan_check)]

extern crate rbml;
extern crate serialize;

use std::io;
use std::fmt;
use std::io::{IoResult, SeekStyle};
use std::slice;
use serialize::{Encodable, Encoder};
use serialize::json;
use rbml::writer;
use rbml::io::SeekableMemWriter;

#[derive(Encodable)]
struct Foo {
    baz: bool,
}

#[derive(Encodable)]
struct Bar {
    froboz: uint,
}

enum WireProtocol {
    JSON,
    RBML,
    // ...
}
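
// Serialize `val` as JSON text into the in-memory writer.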
fn encode_json<
    T: for<'a> Encodable<json::Encoder<'a>,
                         fmt::Error>>(val: &T,
                                      wr: &mut SeekableMemWriter) {
    write!(wr, "{}", json::as_json(val));
}
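
// Serialize `val` with the RBML encoder into the same writer type.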
fn encode_rbml<'a,
               T: Encodable<writer::Encoder<'a, SeekableMemWriter>,
                            io::IoError>>(val: &T,
                                          wr: &'a mut SeekableMemWriter) {
    let mut encoder = writer::Encoder::new(wr);
    val.encode(&mut encoder);
}
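
// Build a value, pick a wire protocol, and dispatch to the matching encoder.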
pub fn main() {
    let target = Foo{baz: false,};
    let mut wr = SeekableMemWriter::new();
    let proto = WireProtocol::JSON;
    match proto {
        WireProtocol::JSON => encode_json(&target, &mut wr),
        WireProtocol::RBML => encode_rbml(&target, &mut wr)
    }
}