# Copyright 2015 The Rust Project Developers. See the COPYRIGHT
# file at the top-level directory of this distribution and at
# http://rust-lang.org/COPYRIGHT.
#
# Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
# http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
# <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
# option. This file may not be copied, modified, or distributed
# except according to those terms.

r"""
|
|
|
|
htmldocck.py is a custom checker script for Rustdoc HTML outputs.
|
|
|
|
|
|
|
|
# How and why?
|
|
|
|
|
|
|
|
The principle is simple: This script receives a path to generated HTML
|
|
|
|
documentation and a "template" script, which has a series of check
|
|
|
|
commands like `@has` or `@matches`. Each command can be used to check if
|
|
|
|
some pattern is present or not present in the particular file or in
|
|
|
|
the particular node of HTML tree. In many cases, the template script
|
|
|
|
happens to be a source code given to rustdoc.

While it is indeed possible to test in smaller portions, it has proven
hard to construct tests in that fashion, and major rendering errors were
discovered much later. This script is designed to make black-box and
regression testing of Rustdoc easy. It does not preclude the need for
unit testing, but can be used to complement such tests by quickly
showing the expected renderings.

In order to avoid one-off dependencies for this task, this script uses
a reasonably working HTML parser and the existing XPath implementation
from Python 2's standard library. Hopefully we won't render
non-well-formed HTML.

# Commands

Commands start with an `@` followed by a command name (letters and
hyphens), and zero or more arguments separated by one or more whitespace
characters and optionally delimited with single or double quotes. The `@`
mark cannot be preceded by a non-whitespace character. Other lines
(including any text before the first `@`) are ignored, but it is
recommended to avoid the use of `@` in the template file.
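
For instance, a template line of the following form (the file name and
patterns here are hypothetical) declares a check with three arguments,
the last two delimited by single quotes:

    // @has foo/fn.bar.html '//pre' 'pub fn bar()'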

There are a number of supported commands:

* `@has PATH` checks for the existence of the given file.

  `PATH` is relative to the output directory. It can be given as `-`
  which repeats the most recently used `PATH`.

* `@has PATH PATTERN` and `@matches PATH PATTERN` check for
  the occurrence of the given `PATTERN` in the given file. A single
  occurrence of the pattern is enough.

  For `@has`, `PATTERN` is a whitespace-normalized (every run of
  consecutive whitespace is replaced by a single space character) string.
  The entire file is also whitespace-normalized, including newlines.

  For `@matches`, `PATTERN` is a Python-supported regular expression.
  The file remains intact, and the regexp is matched without the
  `MULTILINE` and `IGNORECASE` options. You can still use a `(?m)` or
  `(?i)` prefix to enable them, and `\A` and `\Z` to definitely match
  the beginning and end of the file.

  (The same distinction applies to the other variants of these commands.)

* `@has PATH XPATH PATTERN` and `@matches PATH XPATH PATTERN` check for
  the presence of the given `XPATH` in the given HTML file, and also for
  the occurrence of the given `PATTERN` in the matching node or attribute.
  A single occurrence of the pattern in the match is enough.

  `PATH` should point to a valid and well-formed HTML file. The checker
  does *not* accept arbitrary HTML5; at the very least the file should
  have matching open and close tags and correct entity references.

  `XPATH` is an XPath expression to match. It is fairly limited: only
  `tag`, `*`, `.`, `//`, `..`, `[@attr]`, `[@attr='value']`, `[tag]`,
  `[POS]` (element located at the given `POS`), `[last()-POS]`, `text()`
  and `@attr` (the latter two only as the last segment) are supported.
  Some examples:

  - `//pre` or `.//pre` matches any element named `pre`.
  - `//a[@href]` matches any element with an `href` attribute.
  - `//*[@class="impl"]//code` matches any element named `code` that is
    a descendant of some element whose `class` attribute is `impl`.
  - `//h1[@class="fqn"]/span[1]/a[last()]/@class` matches the value of the
    `class` attribute of the last `a` element (it can be followed by more
    elements that are not `a`) inside the first `span` in the `h1` with
    a class of `fqn`. Note that there cannot be any additional elements
    in between due to the use of `/` instead of `//`.

  Do not try to use non-absolute paths; they won't work due to the flawed
  ElementTree implementation, and the script rejects them.

  For text matches (i.e. paths not ending with `@attr`), any subelements
  are flattened into one string; this is handy for ignoring highlights,
  for example. If you simply want to check the presence of a given node
  or attribute, use an empty string (`""`) as the `PATTERN`.

* `@count PATH XPATH COUNT` checks the number of occurrences of the given
  XPath in the given file. The number of matches must equal the given
  count.

All conditions can be negated with `!`. For example,
`@!has foo/type.NoSuch.html` checks that the given file does not exist.
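
Putting these together, a small hypothetical template (every path and
pattern below is illustrative rather than taken from a real crate) could
contain checks such as:

    // @has     foo/index.html
    // @has     foo/fn.bar.html '//pre' 'pub fn bar() -> u32'
    // @matches foo/fn.bar.html '//h1' '(?i)function'
    // @count   foo/index.html '//*[@class="module-item"]' 2
    // @!has    foo/fn.baz.html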

"""

from __future__ import print_function
import sys
import os.path
import re
import shlex
from collections import namedtuple
from HTMLParser import HTMLParser
from xml.etree import cElementTree as ET

# ⇤/⇥ are not in HTML 4 but are in HTML 5
from htmlentitydefs import entitydefs
entitydefs['larrb'] = u'\u21e4'
entitydefs['rarrb'] = u'\u21e5'
entitydefs['nbsp'] = ' '

# "void elements" (no closing tag) from the HTML Standard section 12.1.2
VOID_ELEMENTS = set(['area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input', 'keygen',
                     'link', 'menuitem', 'meta', 'param', 'source', 'track', 'wbr'])


class CustomHTMLParser(HTMLParser):
    """simplified HTML parser.

    this is possible because we are dealing with very regular HTML from
    rustdoc; we only have to deal with i) void elements and ii) empty
    attributes."""
    def __init__(self, target=None):
        HTMLParser.__init__(self)
        self.__builder = target or ET.TreeBuilder()

    def handle_starttag(self, tag, attrs):
        attrs = dict((k, v or '') for k, v in attrs)
        self.__builder.start(tag, attrs)
        if tag in VOID_ELEMENTS:
            self.__builder.end(tag)

    def handle_endtag(self, tag):
        self.__builder.end(tag)

    def handle_startendtag(self, tag, attrs):
        attrs = dict((k, v or '') for k, v in attrs)
        self.__builder.start(tag, attrs)
        self.__builder.end(tag)

    def handle_data(self, data):
        self.__builder.data(data)

    def handle_entityref(self, name):
        self.__builder.data(entitydefs[name])

    def handle_charref(self, name):
        code = int(name[1:], 16) if name.startswith(('x', 'X')) else int(name, 10)
        self.__builder.data(unichr(code).encode('utf-8'))

    def close(self):
        HTMLParser.close(self)
        return self.__builder.close()
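

# A minimal usage sketch for the parser above (the markup is a made-up
# snippet, not actual rustdoc output):
#
#     parser = CustomHTMLParser()
#     parser.feed('<p class="x">hi<br>there</p>')
#     root = parser.close()  # ElementTree element with tag 'p'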


Command = namedtuple('Command', 'negated cmd args lineno context')


class FailedCheck(Exception):
    pass


class InvalidCheck(Exception):
    pass


def concat_multi_lines(f):
    """returns a generator out of the file object, which
    - removes `\\` then `\n` then a shared prefix with the previous line then
      optional whitespace;
    - keeps a line number (starting from 0) of the first line being
      concatenated."""
    lastline = None  # set to the last line when the last line has a backslash
    firstlineno = None
    catenated = ''
    for lineno, line in enumerate(f):
        line = line.rstrip('\r\n')

        # strip the common prefix from the current line if needed
        if lastline is not None:
            maxprefix = 0
            for i in xrange(min(len(line), len(lastline))):
                if line[i] != lastline[i]:
                    break
                maxprefix += 1
            line = line[maxprefix:].lstrip()

        firstlineno = firstlineno or lineno
        if line.endswith('\\'):
            if lastline is None:
                lastline = line[:-1]
            catenated += line[:-1]
        else:
            yield firstlineno, catenated + line
            lastline = None
            firstlineno = None
            catenated = ''

    if lastline is not None:
        print_err(lineno, line, 'Trailing backslash at the end of the file')
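

# Illustrative behaviour (the template lines are hypothetical): two source
# lines such as
#     // @has foo/index.html \
#     //     '//a' 'bar'
# are joined and yielded as the single line
# "// @has foo/index.html '//a' 'bar'", tagged with the 0-based number of
# the first of the two lines.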


LINE_PATTERN = re.compile(r'''
    (?<=(?<!\S)@)(?P<negated>!?)
    (?P<cmd>[A-Za-z]+(?:-[A-Za-z]+)*)
    (?P<args>.*)$
''', re.X)
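
# For example, the hypothetical line "// @!has foo/bar.html 'baz'" is
# captured as negated='!', cmd='has', args=" foo/bar.html 'baz'"; the
# lookbehind rejects any '@' preceded by a non-whitespace character.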


def get_commands(template):
    with open(template, 'rUb') as f:
        for lineno, line in concat_multi_lines(f):
            m = LINE_PATTERN.search(line)
            if not m:
                continue

            negated = (m.group('negated') == '!')
            cmd = m.group('cmd')
            args = m.group('args')
            if args and not args[:1].isspace():
                print_err(lineno, line, 'Invalid template syntax')
                continue
            args = shlex.split(args)
            yield Command(negated=negated, cmd=cmd, args=args, lineno=lineno+1, context=line)
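
# A hypothetical template line "// @!has - 'missing text'" would be yielded
# as Command(negated=True, cmd='has', args=['-', 'missing text'],
# lineno=<1-based line number>, context=<the whole line>).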


def _flatten(node, acc):
    if node.text:
        acc.append(node.text)
    for e in node:
        _flatten(e, acc)
        if e.tail:
            acc.append(e.tail)


def flatten(node):
    acc = []
    _flatten(node, acc)
    return ''.join(acc)
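
# For illustration, flattening a node parsed from the made-up snippet
# '<code>foo<span class="hl">bar</span>baz</code>' yields 'foobarbaz';
# markup inserted by e.g. syntax highlighting is effectively ignored.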


def normalize_xpath(path):
    if path.startswith('//'):
        return '.' + path  # avoid warnings
    elif path.startswith('.//'):
        return path
    else:
        raise InvalidCheck('Non-absolute XPath is not supported due to implementation issues')


class CachedFiles(object):
    def __init__(self, root):
        self.root = root
        self.files = {}
        self.trees = {}
        self.last_path = None

    def resolve_path(self, path):
        if path != '-':
            path = os.path.normpath(path)
            self.last_path = path
            return path
        elif self.last_path is None:
            raise InvalidCheck('Tried to use the previous path in the first command')
        else:
            return self.last_path

    def get_file(self, path):
        path = self.resolve_path(path)
        if path in self.files:
            return self.files[path]

        abspath = os.path.join(self.root, path)
        if not (os.path.exists(abspath) and os.path.isfile(abspath)):
            raise FailedCheck('File does not exist {!r}'.format(path))

        with open(abspath) as f:
            data = f.read()
            self.files[path] = data
            return data

    def get_tree(self, path):
        path = self.resolve_path(path)
        if path in self.trees:
            return self.trees[path]

        abspath = os.path.join(self.root, path)
        if not (os.path.exists(abspath) and os.path.isfile(abspath)):
            raise FailedCheck('File does not exist {!r}'.format(path))

        with open(abspath) as f:
            try:
                tree = ET.parse(f, CustomHTMLParser())
            except Exception as e:
                raise RuntimeError('Cannot parse an HTML file {!r}: {}'.format(path, e))
            self.trees[path] = tree
            return self.trees[path]


def check_string(data, pat, regexp):
    if not pat:
        return True  # special case: an empty pattern only tests presence
    elif regexp:
        return re.search(pat, data) is not None
    else:
        data = ' '.join(data.split())
        pat = ' '.join(pat.split())
        return pat in data
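
# Illustrative expectations (not executed):
#   check_string('a  b\nc', 'b c', regexp=False)      -> True  (whitespace-normalized)
#   check_string('foobar', r'fo+bar\Z', regexp=True)  -> True
#   check_string('anything at all', '', regexp=False) -> True  (empty pattern = presence)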


def check_tree_attr(tree, path, attr, pat, regexp):
    path = normalize_xpath(path)
    ret = False
    for e in tree.findall(path):
        if attr in e.attrib:
            value = e.attrib[attr]
        else:
            continue

        ret = check_string(value, pat, regexp)
        if ret:
            break
    return ret


def check_tree_text(tree, path, pat, regexp):
    path = normalize_xpath(path)
    ret = False
    for e in tree.findall(path):
        try:
            value = flatten(e)
        except KeyError:
            continue
        else:
            ret = check_string(value, pat, regexp)
            if ret:
                break
    return ret


def get_tree_count(tree, path):
    path = normalize_xpath(path)
    return len(tree.findall(path))


def stderr(*args):
    print(*args, file=sys.stderr)


def print_err(lineno, context, err, message=None):
    global ERR_COUNT
    ERR_COUNT += 1
    stderr("{}: {}".format(lineno, message or err))
    if message and err:
        stderr("\t{}".format(err))

    if context:
        stderr("\t{}".format(context))


ERR_COUNT = 0


def check_command(c, cache):
    try:
        cerr = ""
        if c.cmd == 'has' or c.cmd == 'matches':  # string test
            regexp = (c.cmd == 'matches')
            if len(c.args) == 1 and not regexp:  # @has <path> = file existence
                try:
                    cache.get_file(c.args[0])
                    ret = True
                except FailedCheck as err:
                    cerr = err.message
                    ret = False
            elif len(c.args) == 2:  # @has/matches <path> <pat> = string test
                cerr = "`PATTERN` did not match"
                ret = check_string(cache.get_file(c.args[0]), c.args[1], regexp)
            elif len(c.args) == 3:  # @has/matches <path> <pat> <match> = XML tree test
                cerr = "`XPATH PATTERN` did not match"
                tree = cache.get_tree(c.args[0])
                pat, sep, attr = c.args[1].partition('/@')
                if sep:  # attribute
                    ret = check_tree_attr(tree, pat, attr, c.args[2], regexp)
                else:  # normalized text
                    pat = c.args[1]
                    if pat.endswith('/text()'):
                        pat = pat[:-7]
                    ret = check_tree_text(tree, pat, c.args[2], regexp)
            else:
                raise InvalidCheck('Invalid number of @{} arguments'.format(c.cmd))

        elif c.cmd == 'count':  # count test
            if len(c.args) == 3:  # @count <path> <pat> <count> = count test
                expected = int(c.args[2])
                found = get_tree_count(cache.get_tree(c.args[0]), c.args[1])
                cerr = "Expected {} occurrences but found {}".format(expected, found)
                ret = expected == found
            else:
                raise InvalidCheck('Invalid number of @{} arguments'.format(c.cmd))
        elif c.cmd == 'valid-html':
            raise InvalidCheck('Unimplemented @valid-html')
        elif c.cmd == 'valid-links':
            raise InvalidCheck('Unimplemented @valid-links')
        else:
            raise InvalidCheck('Unrecognized @{}'.format(c.cmd))

        if ret == c.negated:
            raise FailedCheck(cerr)

    except FailedCheck as err:
        message = '@{}{} check failed'.format('!' if c.negated else '', c.cmd)
        print_err(c.lineno, c.context, err.message, message)
    except InvalidCheck as err:
        print_err(c.lineno, c.context, err.message)


def check(target, commands):
    cache = CachedFiles(target)
    for c in commands:
        check_command(c, cache)


if __name__ == '__main__':
    if len(sys.argv) != 3:
        stderr('Usage: {} <doc dir> <template>'.format(sys.argv[0]))
        raise SystemExit(1)

    check(sys.argv[1], get_commands(sys.argv[2]))
    if ERR_COUNT:
        stderr("\nEncountered {} errors".format(ERR_COUNT))
        raise SystemExit(1)