class Tokenizer

    A Tokenizer is used to extract tokens from a source stream. Tokens are
    of the form normally accepted by C-like languages.

    XXX This class really needs to become generic, with its C personality
    moved to CTokenizer.
def __init__(self, src):

    Constructor. Creates a new Tokenizer given a source file.
def __takeIt(self, tokType, toksrc):

    Takes the next token off of the front of the buffer and returns it. If
    the token type is one that continues onto the next line, the token is
    continued there.

def breakOff(self, toksrc):

    "Breaks off" the regular expression match specified by toksrc from the
    front of the buffer and returns it.

def fillBuffer(self):

    Makes sure that the buffer has data in it.

def nextPreparsed(self):

    Returns the next token in the preparsed list (the list of tokens that
    have already been parsed and were put back).

def nextToken(self):

    Returns the next token, either from the preparsed cache or from the
    stream.

def parseNextToken(self):

    Parses the next token directly off of the stream. Clients should
    generally avoid using this; use nextToken() instead, since that will
    use the preparsed queue if tokens have been put back.

def putBack(self, tok):

    Puts the given token back on the preparsed list.
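
To make the buffer and put-back mechanics above concrete, here is a minimal
sketch of the same design. It is not the actual implementation: the
MiniTokenizer and Token classes, the token-type constants, and the
whitespace handling are illustrative assumptions; only the method names and
their roles mirror the documentation above.

    import re

    class Token:
        # Hypothetical token record; the real Token class is not shown above.
        def __init__(self, type, val):
            self.type = type
            self.val = val

    class MiniTokenizer:
        WORD = 1   # assumed token-type constants, for illustration only
        NUM = 2

        def __init__(self, src):
            self.src = src        # source file-like object
            self.buf = ''         # front of the unconsumed input
            self.preparsed = []   # tokens that were put back

        def fillBuffer(self):
            # Make sure that the buffer has data in it.
            while not self.buf:
                line = self.src.readline()
                if not line:
                    raise EOFError('end of token stream')
                self.buf = line.strip()

        def breakOff(self, toksrc):
            # "Break off" the regex match toksrc from the front of the
            # buffer; return the matched text, or None if it does not match.
            m = re.match(toksrc, self.buf)
            if not m:
                return None
            self.buf = self.buf[m.end():].lstrip()
            return m.group()

        def parseNextToken(self):
            # Parse directly off of the stream, bypassing the put-back queue.
            self.fillBuffer()
            for tokType, pat in ((self.NUM, r'\d+'), (self.WORD, r'\w+')):
                text = self.breakOff(pat)
                if text is not None:
                    return Token(tokType, text)
            raise SyntaxError('unrecognized input: %r' % self.buf)

        def nextToken(self):
            # Prefer the preparsed cache, then fall back to the stream.
            if self.preparsed:
                return self.preparsed.pop()
            return self.parseNextToken()

        def putBack(self, tok):
            # A token put back is the next one nextToken() returns.
            self.preparsed.append(tok)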
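
A short usage example of the put-back protocol: a client reads one token too
many during lookahead and returns it with putBack(), after which nextToken()
serves it from the preparsed list rather than the stream. The input string
here is, of course, made up.

    import io

    t = MiniTokenizer(io.StringIO('count 42'))
    name = t.nextToken()               # WORD 'count'
    ahead = t.nextToken()              # NUM '42' -- one token of lookahead
    t.putBack(ahead)                   # not ready for it yet
    assert t.nextToken().val == '42'   # served from the preparsed list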