Improve .chars().count()
Use a simpler loop to count the `char`s of a string: count the
number of non-continuation bytes. The loop body uses `count += <conditional>`,
a pattern the compiler understands well and can apply loop optimizations to.
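Roughly, the counting loop looks like the sketch below (a simplified illustration of the approach, not the exact code added in this PR). In UTF-8, continuation bytes always have the form `0b10xx_xxxx`, and every `char` is encoded with exactly one non-continuation byte, so counting the non-continuation bytes gives the number of chars:

```rust
/// Simplified sketch: count chars by counting non-continuation bytes.
fn char_count(s: &str) -> usize {
    let mut count = 0;
    for &byte in s.as_bytes() {
        // `count += <conditional>`: a branchless pattern the optimizer
        // can turn into a cmov or a vectorized sum.
        count += ((byte & 0xC0) != 0x80) as usize;
    }
    count
}

fn main() {
    assert_eq!(char_count("héllo"), "héllo".chars().count());
    assert_eq!(char_count("日本語"), 3);
}
```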
Benchmark descriptions, followed by results for two compiler configurations:
- ascii: ASCII text
- cy: Cyrillic text
- jp: Japanese text
- words ascii: counting each `split_whitespace` item from the ascii text (see the sketch after the results below)
- words jp: counting each `split_whitespace` item from the jp text
```
x86-64 rustc -Copt-level=3

name               orig_ ns/iter      cmov_ ns/iter      diff ns/iter  diff %
count_ascii        1,453 (1755 MB/s)  1,398 (1824 MB/s)           -55  -3.79%
count_cy           5,990 (856 MB/s)   2,545 (2016 MB/s)        -3,445  -57.51%
count_jp           3,075 (1169 MB/s)  1,772 (2029 MB/s)        -1,303  -42.37%
count_words_ascii  4,157 (521 MB/s)   1,797 (1205 MB/s)        -2,360  -56.77%
count_words_jp     3,337 (1071 MB/s)  1,772 (2018 MB/s)        -1,565  -46.90%

x86-64 rustc -Ctarget-feature=+avx -Copt-level=3

name               orig_ ns/iter      cmov_ ns/iter      diff ns/iter  diff %
count_ascii        1,444 (1766 MB/s)  763 (3343 MB/s)            -681  -47.16%
count_cy           5,871 (874 MB/s)   1,527 (3360 MB/s)        -4,344  -73.99%
count_jp           2,874 (1251 MB/s)  1,073 (3351 MB/s)        -1,801  -62.67%
count_words_ascii  4,131 (524 MB/s)   1,871 (1157 MB/s)        -2,260  -54.71%
count_words_jp     3,253 (1099 MB/s)  1,331 (2686 MB/s)        -1,922  -59.08%
```
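For reference, the `words` variants count the chars of each whitespace-separated item rather than the whole text at once; a hypothetical sketch of what is measured (the actual benchmark harness is not shown here):

```rust
// Hypothetical shape of the count_words_* benchmarks: many short
// `.chars().count()` calls instead of one long one.
fn count_words_chars(text: &str) -> usize {
    text.split_whitespace()
        .map(|word| word.chars().count())
        .sum()
}

fn main() {
    assert_eq!(count_words_chars("foo bar baz"), 9);
}
```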
I briefly explored a more involved blocked algorithm (looking at 8 or more bytes at a time),
but the code in this PR always won on `count_words_ascii` in particular (counting
many small strings); this solution is an improvement without tradeoffs.