
document lark.Token

tags/gm/2021-09-23T00Z/github.com--lark-parser-lark/0.10.0
Sasank Chilamkurthy 4 years ago
parent
commit
fd08f470e2
2 changed files with 25 additions and 1 deletion
  1. docs/classes.rst (+6, -1)
  2. lark/lexer.py (+19, -0)

docs/classes.rst (+6, -1)

@@ -17,4 +17,9 @@ Tree

.. autoclass:: lark.Tree
    :members: pretty, find_pred, find_data, iter_subtrees,
              iter_subtrees_topdown

Token
-----

.. autoclass:: lark.Token
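
The Tree members named in the :members: option above are ordinary methods on lark.Tree. As a quick illustration (not part of this commit), a hand-built tree can exercise them directly; the tree contents below are made up for the example:

from lark import Tree, Token

# Hand-built parse tree, similar to what Lark would return for a tiny grammar.
tree = Tree('start', [
    Tree('assign', [Token('NAME', 'answer'), Token('NUMBER', '42')]),
])

print(tree.pretty())                      # indented, human-readable dump
assigns = list(tree.find_data('assign'))  # all subtrees whose .data == 'assign'
for sub in tree.iter_subtrees():          # every subtree of the tree
    print(sub.data)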

lark/lexer.py (+19, -0)

@@ -90,6 +90,25 @@ class TerminalDef(Serialize):


class Token(Str):
"""Token of a lexer.

When using a lexer, the resulting tokens in the trees will be of the
Token class, which inherits from Python's string. So, normal string
comparisons and operations will work as expected. Tokens also have other
useful attributes.

Attributes:
type_: Name of the token (as specified in grammar)
pos_in_stream: The index of the token in the text
line: The line of the token in the text (starting with 1)
column: The column of the token in the text (starting with 1)
end_line: The line where the token ends
end_column: The next column after the end of the token. For example,
if the token is a single character with a column value of 4,
end_column will be 5.
end_pos: the index where the token ends (basically pos_in_stream +
len(token))
"""
__slots__ = ('type', 'pos_in_stream', 'value', 'line', 'column', 'end_line', 'end_column', 'end_pos')

def __new__(cls, type_, value, pos_in_stream=None, line=None, column=None, end_line=None, end_column=None, end_pos=None):
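
As a quick usage sketch of the attributes documented above (not part of this commit), the snippet below parses a made-up toy grammar and reads the documented fields off the resulting tokens; the grammar and input are assumptions for illustration only:

from lark import Lark, Token

# Toy grammar, made up purely for illustration.
parser = Lark(r"""
    start: NAME "=" NUMBER
    NAME: /[a-z]+/
    NUMBER: /\d+/
    %import common.WS
    %ignore WS
""", parser='lalr')

tree = parser.parse("answer = 42")

for tok in tree.children:
    assert isinstance(tok, Token)
    # Token subclasses str, so plain string operations work as expected.
    print(tok.upper(), tok == tok.value)
    # Position attributes documented in the docstring above
    # (attribute names as of this lark version).
    print(tok.type, tok.line, tok.column, tok.pos_in_stream, tok.end_pos)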

