App Engine Python SDK
v1.6.9 rev.445
The Python runtime is available as an experimental Preview feature.
Public Member Functions

  def __init__
  def reset
  def nextToken
  def skip
  def mTokens
  def setCharStream
  def getSourceName
  def emit
  def match
  def matchAny
  def matchRange
  def getLine
  def getCharPositionInLine
  def getCharIndex
  def getText
  def setText
  def reportError
  def getErrorMessage
  def getCharErrorDisplay
  def recover
  def traceIn
  def traceOut
Public Member Functions inherited from BaseRecognizer

  def __init__
  def setInput
  def reset
  def match
  def matchAny
  def mismatchIsUnwantedToken
  def mismatchIsMissingToken
  def mismatch

Public Member Functions inherited from TokenSource

  def nextToken
  def __iter__
  def next
Public Attributes

  input

Public Attributes inherited from BaseRecognizer

  input

Properties

  text = property(getText, setText)
Additional Inherited Members

  int MEMO_RULE_FAILED = -2
  int MEMO_RULE_UNKNOWN = -1
  DEFAULT_TOKEN_CHANNEL = DEFAULT_CHANNEL
  HIDDEN = HIDDEN_CHANNEL
  tokenNames = None
  tuple antlr_version = (3, 0, 1, 0)
  string antlr_version_str = "3.0.1"
@brief Base class for generated lexer classes. A lexer is a recognizer that draws input symbols from a character stream. Lexer grammars result in a subclass of this object. A Lexer object uses simplified match() and error-recovery mechanisms in the interest of speed.
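The simplified match() mechanism mentioned above can be sketched with a small, self-contained stream class. `CharStream` and the function names here are illustrative stand-ins, not the antlr3 API:

```python
EOF = None

class CharStream:
    """Tiny character stream with one-character lookahead (illustrative)."""
    def __init__(self, data):
        self.data = data
        self.index = 0

    def LA(self):
        # Current lookahead character, or EOF when the input is exhausted.
        return self.data[self.index] if self.index < len(self.data) else EOF

    def consume(self):
        self.index += 1

def match(stream, expected):
    # Simplified match(): compare the lookahead character and consume it,
    # failing fast on a mismatch instead of building a rich exception.
    if stream.LA() != expected:
        raise ValueError("expected %r, found %r" % (expected, stream.LA()))
    stream.consume()

s = CharStream("ab")
match(s, "a")
match(s, "b")   # stream is now exhausted; s.LA() is EOF
```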
def google.appengine._internal.antlr3.recognizers.Lexer.emit(self, token=None)
The standard method called to automatically emit a token at the outermost lexical rule. The token object should point into the char buffer start..stop. If there is a text override in 'text', use that to set the token's text. Override this method to emit custom Token objects. If you are building trees, then you should also override Parser or TreeParser.getMissingSymbol().
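The contract described above, namely a token pointing into the char buffer via start..stop with an optional text override, can be sketched with a stand-in token class (hypothetical names; the real runtime has its own token type):

```python
class Token:
    """Illustrative stand-in for the runtime's token class."""
    def __init__(self, type, start, stop, text=None):
        self.type = type
        self.start = start   # index of the token's first char in the buffer
        self.stop = stop     # index of the token's last char in the buffer
        self.text = text     # explicit text override, if any

def emit(buffer, type, start, stop, override=None):
    # Build a token pointing into the char buffer via start..stop;
    # an explicit text override, when present, wins over the buffer slice.
    token = Token(type, start, stop, text=override)
    if token.text is None:
        token.text = buffer[start:stop + 1]
    return token

tok = emit("let x = 1", type=1, start=4, stop=4)
# tok.text is "x", sliced from the buffer
```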
def google.appengine._internal.antlr3.recognizers.Lexer.getCharIndex(self)

Return the index of the current lookahead character.
def google.appengine._internal.antlr3.recognizers.Lexer.getText(self)
Return the text matched so far for the current token or any text override.
def google.appengine._internal.antlr3.recognizers.Lexer.mTokens(self)

This is the lexer entry point; it sets the instance variable 'token'.
def google.appengine._internal.antlr3.recognizers.Lexer.nextToken(self)
Return a token from this source; i.e., match a token on the char stream.
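The loop behind this method can be sketched as follows. All names here are illustrative, and the real method additionally records token start positions and runs error recovery:

```python
SKIP_TOKEN = object()   # sentinel, standing in for the runtime's SKIP_TOKEN
EOF_TOKEN = object()

class Stream:
    """Tiny char stream (illustrative, not the antlr3 class)."""
    def __init__(self, text):
        self.text = text
        self.i = 0

    def LA(self):
        return self.text[self.i] if self.i < len(self.text) else None

def m_tokens(stream):
    # Toy mTokens(): one rule skips spaces, one rule matches words.
    if stream.LA() == " ":
        stream.i += 1
        return SKIP_TOKEN
    start = stream.i
    while stream.LA() not in (None, " "):
        stream.i += 1
    return stream.text[start:stream.i]

def next_token(stream):
    # Match rules until one yields a real token; SKIP_TOKEN means
    # "keep looking", which is the protocol the docs describe.
    while True:
        if stream.LA() is None:
            return EOF_TOKEN
        token = m_tokens(stream)
        if token is not SKIP_TOKEN:
            return token

s = Stream("ab cd")
# next_token(s) yields "ab", then "cd", then EOF_TOKEN
```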
def google.appengine._internal.antlr3.recognizers.Lexer.recover(self, re)

Lexers can normally match any char in their vocabulary after matching a token, so do the easy thing: just kill a character and hope it all works out. You can instead use the rule invocation stack to do sophisticated error recovery if you are in a fragment rule.
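That single-character recovery strategy amounts to consuming one char and retrying. A sketch under illustrative names:

```python
class Stream:
    """Tiny char stream (illustrative)."""
    def __init__(self, text):
        self.text = text
        self.i = 0

    def LA(self):
        return self.text[self.i] if self.i < len(self.text) else None

    def consume(self):
        self.i += 1

def recover(stream):
    # "Kill a character and hope it all works out": discard the
    # offending char so scanning resumes one position later.
    stream.consume()

def lex_digits(stream):
    # Toy rule set that accepts only digits; any other character is
    # treated as an error and dropped via single-char recovery.
    tokens = []
    while stream.LA() is not None:
        if stream.LA().isdigit():
            tokens.append(stream.LA())
            stream.consume()
        else:
            recover(stream)
    return tokens

# lex_digits(Stream("1x2")) -> ["1", "2"]
```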
def google.appengine._internal.antlr3.recognizers.Lexer.setCharStream(self, input)

Set the char stream and reset the lexer.
def google.appengine._internal.antlr3.recognizers.Lexer.setText(self, text)
Set the complete text of this token; it wipes any previous changes to the text.
def google.appengine._internal.antlr3.recognizers.Lexer.skip(self)
Instruct the lexer to skip creating a token for the current lexer rule and look for another token. nextToken() knows to keep looking when a lexer rule finishes with token set to SKIP_TOKEN. Recall that if token is None at the end of any token rule, one is created for you and emitted.
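The skip()/auto-emit protocol recalled above can be sketched like this (hypothetical names; in the real runtime the 'token' slot and skip() live on the Lexer itself):

```python
SKIP_TOKEN = object()   # stand-in for the runtime's SKIP_TOKEN sentinel

class RuleContext:
    """Holds the 'token' slot a lexer rule may set (illustrative)."""
    def __init__(self):
        self.token = None

    def skip(self):
        # skip(): mark this rule's result so the token loop keeps looking.
        self.token = SKIP_TOKEN

def run_rule(rule, make_token):
    # If the rule finishes with token still None, a token is created
    # and emitted for it automatically, as the documentation recalls.
    ctx = RuleContext()
    rule(ctx)
    if ctx.token is None:
        ctx.token = make_token()
    return ctx.token

def ws_rule(ctx):
    ctx.skip()          # whitespace: no token wanted

def id_rule(ctx):
    pass                # leaves token as None; one is auto-created

# run_rule(ws_rule, ...) -> SKIP_TOKEN; run_rule(id_rule, ...) -> auto token
```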