
Merge branch 'master' into 0.7d

Erez Shinan · 6 years ago · commit 989dd9c498
tags/gm/2021-09-23T00Z/github.com--lark-parser-lark/0.6.6
21 changed files with 81 additions and 67 deletions
 1. .gitignore                        +1  -0
 2. docs/features.md                  +6  -6
 3. docs/grammar.md                   +6  -2
 4. docs/how_to_use.md                +7  -31
 5. docs/index.md                     +4  -1
 6. docs/json_tutorial.md             +2  -1
 7. docs/philosophy.md                +4  -4
 8. docs/recipes.md                   +4  -2
 9. docs/tree_construction.md         +1  -1
10. examples/custom_lexer.py          +1  -1
11. examples/python2.lark             +1  -1
12. lark/exceptions.py                +2  -0
13. lark/lark.py                      +4  -0
14. lark/load_grammar.py              +1  -1
15. lark/parsers/grammar_analysis.py  +5  -1
16. lark/reconstruct.py               +7  -0
17. lark/utils.py                     +5  -1
18. lark/visitors.py                  +16 -10
19. tests/grammars/test.lark          +1  -1
20. tests/test_parser.py              +2  -2
21. tox.ini                           +1  -1

.gitignore (+1 -0)

@@ -1,5 +1,6 @@
*.pyc
*.pyo
/.tox
/lark_parser.egg-info/**
tags
+.vscode


docs/features.md (+6 -6)

@@ -1,7 +1,7 @@
-# Features
+# Main Features

- EBNF-inspired grammar, with extra features (See: [Grammar Reference](grammar.md))
- Builds a parse-tree (AST) automagically based on the grammar
- Stand-alone parser generator - create a small independent parser to embed in your project.
- Automatic line & column tracking
- Automatic terminal collision resolution
@@ -39,16 +39,17 @@ Lark extends the traditional YACC-based architecture with a *contextual lexer*,

The contextual lexer communicates with the parser, and uses the parser's lookahead prediction to narrow its choice of tokens. So at each point, the lexer only matches the subgroup of terminals that are legal at that parser state, instead of all of the terminals. It’s surprisingly effective at resolving common terminal collisions, and allows to parse languages that LALR(1) was previously incapable of parsing.

This is an improvement to LALR(1) that is unique to Lark.

### CYK Parser

A [CYK parser](https://www.wikiwand.com/en/CYK_algorithm) can parse any context-free grammar at O(n^3*|G|).

Its too slow to be practical for simple grammars, but it offers good performance for highly ambiguous grammars.

-# Other features
+# Extra features

- Import rules and tokens from other Lark grammars, for code reuse and modularity.
- Import grammars from Nearley.js

### Experimental features
@@ -59,4 +60,3 @@ Its too slow to be practical for simple grammars, but it offers good performance
- Grammar composition
- LALR(k) parser
- Full regexp-collision support using NFAs
- Automatically produce syntax-highlighters for grammars, for popular IDEs
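
The LALR(1)-plus-contextual-lexer setup described above can be selected explicitly. A minimal sketch (`parser='lalr'` and `lexer='contextual'` are standard Lark options; the grammar itself is illustrative):

```python
from lark import Lark

# 'contextual' tells the LALR(1) parser to use the contextual lexer,
# which filters candidate terminals by the current parser state.
parser = Lark("""
    start: NAME "=" NUMBER
    NAME: /[a-z]+/
    NUMBER: /[0-9]+/
    %import common.WS
    %ignore WS
""", parser='lalr', lexer='contextual')

print(parser.parse("answer = 42").pretty())
```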

docs/grammar.md (+6 -2)

@@ -109,6 +109,10 @@ four_words: word ~ 4

All occurrences of the terminal will be ignored, and won't be part of the parse.

+Using the `%ignore` directive results in a cleaner grammar.
+
+It's especially important for the LALR(1) algorithm, because adding whitespace (or comments, or other extraneous elements) explicitly in the grammar harms its predictive abilities, which are based on a lookahead of 1.

**Syntax:**
```html
%ignore <TERMINAL>
```
@@ -122,9 +126,9 @@ COMMENT: "#" /[^\n]/*
### %import

-Allows to import terminals from lark grammars.
+Allows to import terminals and rules from lark grammars.

-Future versions will allow to import rules and macros.
+When importing rules, all their dependencies will be imported into a namespace, to avoid collisions. It's not possible to override their dependencies (e.g. like you would when inheriting a class).
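
For example, a small grammar that imports the `INT` and `WS` terminals from lark's bundled `common` grammar and ignores whitespace (the rule names here are illustrative):

```
start: number+
number: INT

%import common.INT
%import common.WS
%ignore WS
```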


docs/how_to_use.md (+7 -31)

@@ -10,7 +10,7 @@ This is the recommended process for working with Lark:

3. Try your grammar in Lark against each input sample. Make sure the resulting parse-trees make sense.

4. Use Lark's grammar features to [[shape the tree|Tree Construction]]: Get rid of superfluous rules by inlining them, and use aliases when specific cases need clarification.

- You can perform steps 1-4 repeatedly, gradually growing your grammar to include more sentences.

@@ -18,39 +18,15 @@ This is the recommended process for working with Lark:

Of course, some specific use-cases may deviate from this process. Feel free to suggest these cases, and I'll add them to this page.

-## Basic API Usage
+## Getting started

-For common use, you only need to know 3 classes: Lark, Tree, Transformer ([[Classes Reference]])
+Browse the [Examples](https://github.com/lark-parser/lark/tree/master/examples) to find a template that suits your purposes.

-Here is some mock usage of them. You can see a real example in the [[examples]]
+Read the tutorials to get a better understanding of how everything works. (links in the [main page](/))

-```python
-from lark import Lark, Transformer
-
-grammar = """start: rules and more rules
-
-rule1: other rules AND TOKENS
-      | rule1 "+" rule2 -> add
-      | some value [maybe]
-rule2: rule1 "-" (rule2 | "whatever")*
-
-TOKEN1: "a literal"
-TOKEN2: TOKEN1 "and literals"
-"""
-
-parser = Lark(grammar)
-
-tree = parser.parse("some input string")
-
-class MyTransformer(Transformer):
-    def rule1(self, matches):
-        return matches[0] + matches[1]
-
-    # I don't have to implement rule2 if I don't feel like it!
-
-new_tree = MyTransformer().transform(tree)
-```
+Use the [Cheatsheet (PDF)](lark_cheatsheet.pdf) for quick reference.
+
+Use the reference pages for more in-depth explanations. (links in the [main page](/))

## LALR usage

@@ -64,7 +40,7 @@ logging.basicConfig(level=logging.DEBUG)
collision_grammar = '''
start: as as
as: a*
-a: 'a'
+a: "a"
'''
p = Lark(collision_grammar, parser='lalr', debug=True)
```

docs/index.md (+4 -1)

@@ -36,6 +36,8 @@ $ pip install lark-parser
* Tutorials
  * [How to write a DSL](http://blog.erezsh.com/how-to-write-a-dsl-in-python-with-lark/) - Implements a toy LOGO-like language with an interpreter
  * [How to write a JSON parser](json_tutorial.md)
+* External
+  * [Program Synthesis is Possible](https://www.cs.cornell.edu/~asampson/blog/minisynth.html) - Creates a DSL for Z3
* Guides
  * [How to use Lark](how_to_use.md)
* Reference
@@ -44,4 +46,5 @@ $ pip install lark-parser
  * [Classes](classes.md)
  * [Cheatsheet (PDF)](lark_cheatsheet.pdf)
* Discussion
-  * [Forum (Google Groups)](https://groups.google.com/forum/#!forum/lark-parser)
+  * [Gitter](https://gitter.im/lark-parser/Lobby)
+  * [Forum (Google Groups)](https://groups.google.com/forum/#!forum/lark-parser)

docs/json_tutorial.md (+2 -1)

@@ -79,7 +79,8 @@ By the way, if you're curious what these terminals signify, they are roughly equ

Lark will accept this, if you really want to complicate your life :)

-(You can find the original definitions in [common.lark](/lark/grammars/common.lark).)
+You can find the original definitions in [common.lark](/lark/grammars/common.lark).
+They don't strictly adhere to [json.org](https://json.org/) - but our purpose here is to accept json, not validate it.

Notice that terminals are written in UPPER-CASE, while rules are written in lower-case.
I'll touch more on the differences between rules and terminals later.
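
As a quick illustration of that naming convention (a made-up fragment, not taken from the tutorial):

```
value: NUMBER      // lower-case name: a rule
NUMBER: /-?\d+/    // UPPER-CASE name: a terminal
```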


docs/philosophy.md (+4 -4)

@@ -27,7 +27,7 @@ In accordance with these principles, I arrived at the following design choices:

### 1. Separation of code and grammar

Grammars are the de-facto reference for your language, and for the structure of your parse-tree. For any non-trivial language, the conflation of code and grammar always turns out convoluted and difficult to read.

The grammars in Lark are EBNF-inspired, so they are especially easy to read & work with.

@@ -45,13 +45,13 @@ And anyway, every parse-tree can be replayed as a state-machine, so there is no

See this answer in more detail [here](https://github.com/erezsh/lark/issues/4).

-You can skip the building the tree for LALR(1), by providing Lark with a transformer (see the [JSON example](https://github.com/erezsh/lark/blob/master/examples/json_parser.py)).
+To improve performance, you can skip building the tree for LALR(1), by providing Lark with a transformer (see the [JSON example](https://github.com/erezsh/lark/blob/master/examples/json_parser.py)).

### 3. Earley is the default

The Earley algorithm can accept *any* context-free grammar you throw at it (i.e. any grammar you can write in EBNF, it can parse). That makes it extremely useful for beginners, who are not aware of the strange and arbitrary restrictions that LALR(1) places on its grammars.

As the users grow to understand the structure of their grammar, the scope of their target language and their performance requirements, they may choose to switch over to LALR(1) to gain a huge performance boost, possibly at the cost of some language features.

In short, "Premature optimization is the root of all evil."

@@ -60,4 +60,4 @@ In short, "Premature optimization is the root of all evil."
- Automatically resolve terminal collisions whenever possible

- Automatically keep track of line & column numbers
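
The tree-skipping optimization mentioned above looks like this in practice. A sketch under the assumption that the `transformer` argument is combined with `parser='lalr'` as the docs describe; the grammar and transformer are illustrative:

```python
from lark import Lark, Transformer

class Sum(Transformer):
    def number(self, children):
        return int(children[0])
    def start(self, numbers):
        return sum(numbers)

# With parser='lalr' and a transformer, callbacks run during parsing,
# so no intermediate parse-tree is ever built.
parser = Lark("""
start: number+
number: INT
%import common.INT
%import common.WS
%ignore WS
""", parser='lalr', transformer=Sum())

print(parser.parse("1 2 3"))  # 6
```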

docs/recipes.md (+4 -2)

@@ -22,6 +22,8 @@ It only works with the standard and contextual lexers.
from lark import Lark, Token

def tok_to_int(tok):
    "Convert the value of `tok` from string to int, while maintaining line number & column."
    # tok.type == 'INT'
    return Token.new_borrow_pos(tok.type, int(tok), tok)

parser = Lark("""
@@ -54,7 +56,7 @@ parser = Lark("""
%import common (INT, WS)
%ignore COMMENT
%ignore WS
""", parser="lalr", lexer_callbacks={'COMMENT': comments.append})
""", parser="lalr", lexer_callbacks={'COMMENT': comments.append})

parser.parse("""
1 2 3 # hello
@@ -71,4 +73,4 @@ Prints out:
[Token(COMMENT, '# hello'), Token(COMMENT, '# world')]
```

*Note: We don't have to return a token, because comments are ignored*
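
Putting the first recipe together end-to-end might look like this (a sketch; `lexer_callbacks` and `Token.new_borrow_pos` are the APIs shown above, while the grammar is illustrative):

```python
from lark import Lark, Token

def tok_to_int(tok):
    "Convert the value of `tok` from string to int, while maintaining line number & column."
    return Token.new_borrow_pos(tok.type, int(tok), tok)

parser = Lark("""
start: INT*
%import common.INT
%import common.WS
%ignore WS
""", parser='lalr', lexer_callbacks={'INT': tok_to_int})

# Each INT token now carries an int value instead of a string.
print(parser.parse("3 14 159").children)
```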

docs/tree_construction.md (+1 -1)

@@ -126,4 +126,4 @@ Lark will parse "hello world" as:

start
  greet
  planet

examples/custom_lexer.py (+1 -1)

@@ -49,7 +49,7 @@ def test():
    res = ParseToDict().transform(tree)

    print('-->')
    print(res)  # prints {'alice': [1, 27, 3], 'bob': [4], 'carrie': [], 'dan': [8, 6]}


if __name__ == '__main__':


examples/python2.lark (+1 -1)

@@ -162,7 +162,7 @@ IMAG_NUMBER: (_INT | FLOAT) ("j"|"J")


%ignore /[\t \f]+/ // WS
%ignore /\\[\t \f]*\r?\n/ // LINE_CONT
%ignore COMMENT
%declare _INDENT _DEDENT


lark/exceptions.py (+2 -0)

@@ -86,4 +86,6 @@ class UnexpectedToken(ParseError, UnexpectedInput):

        super(UnexpectedToken, self).__init__(message.encode('utf-8'))

+class VisitError(Exception):
+    pass
###}

lark/lark.py (+4 -0)

@@ -42,8 +42,12 @@ class LarkOptions(object):
        cache_grammar - Cache the Lark grammar (Default: False)
        postlex - Lexer post-processing (Default: None) Only works with the standard and contextual lexers.
        start - The start symbol (Default: start)
+<<<<<<< HEAD
        profile - Measure run-time usage in Lark. Read results from the profiler proprety (Default: False)
        priority - How priorities should be evaluated - auto, none, normal, invert (Default: auto)
+=======
+        profile - Measure run-time usage in Lark. Read results from the profiler property (Default: False)
+>>>>>>> master
        propagate_positions - Propagates [line, column, end_line, end_column] attributes into all tree branches.
        lexer_callbacks - Dictionary of callbacks for the lexer. May alter tokens during lexing. Use with caution.
        maybe_placeholders - Experimental feature. Instead of omitting optional rules (i.e. rule?), replace them with None


lark/load_grammar.py (+1 -1)

@@ -549,7 +549,7 @@ def import_from_grammar_into_namespace(grammar, namespace, aliases):

    imported_terms = dict(grammar.term_defs)
    imported_rules = {n:(n,deepcopy(t),o) for n,t,o in grammar.rule_defs}

    term_defs = []
    rule_defs = []



lark/parsers/grammar_analysis.py (+5 -1)

@@ -1,3 +1,4 @@
+from collections import Counter

from ..utils import bfs, fzset, classify
from ..exceptions import GrammarError

@@ -111,7 +112,10 @@ class GrammarAnalyzer(object):
        rules = parser_conf.rules + [Rule(NonTerminal('$root'), [NonTerminal(parser_conf.start), Terminal('$END')])]
        self.rules_by_origin = classify(rules, lambda r: r.origin)

-        assert len(rules) == len(set(rules))
+        if len(rules) != len(set(rules)):
+            duplicates = [item for item, count in Counter(rules).items() if count > 1]
+            raise GrammarError("Rules defined twice: %s" % ', '.join(str(i) for i in duplicates))

        for r in rules:
            for sym in r.expansion:
                if not (sym.is_term or sym in self.rules_by_origin):
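
The check itself is a small idiom worth noting: `collections.Counter` makes it easy to report *which* items repeat, not just that a duplicate exists. A standalone sketch (the names are illustrative stand-ins for `Rule` objects):

```python
from collections import Counter

rules = ['start', 'expr', 'term', 'expr']
duplicates = [item for item, count in Counter(rules).items() if count > 1]
if duplicates:
    # Name the offenders instead of failing with a bare assert.
    raise ValueError("Rules defined twice: %s" % ', '.join(str(i) for i in duplicates))
```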


lark/reconstruct.py (+7 -0)

@@ -100,10 +100,17 @@ class Reconstructor:

        for origin, rule_aliases in aliases.items():
            for alias in rule_aliases:
+<<<<<<< HEAD
                yield Rule(origin, [Terminal(alias)], alias=MakeMatchTree(origin.name, [NonTerminal(alias)]))
                yield Rule(origin, [Terminal(origin.name)], alias=MakeMatchTree(origin.name, [origin]))
+=======
+                yield Rule(origin, [Terminal(alias)], MakeMatchTree(origin.name, [NonTerminal(alias)]))
+
+                yield Rule(origin, [Terminal(origin.name)], MakeMatchTree(origin.name, [origin]))
+
+>>>>>>> master

    def _match(self, term, token):


lark/utils.py (+5 -1)

@@ -57,12 +57,16 @@ from functools import wraps, partial
from contextlib import contextmanager

Str = type(u'')
+try:
+    classtype = types.ClassType  # Python2
+except AttributeError:
+    classtype = type  # Python3

def smart_decorator(f, create_decorator):
    if isinstance(f, types.FunctionType):
        return wraps(f)(create_decorator(f, True))

-    elif isinstance(f, (type, types.BuiltinFunctionType)):
+    elif isinstance(f, (classtype, type, types.BuiltinFunctionType)):
        return wraps(f)(create_decorator(f, False))

    elif isinstance(f, types.MethodType):


lark/visitors.py (+16 -10)

@@ -2,6 +2,7 @@ from functools import wraps

from .utils import smart_decorator
from .tree import Tree
+from .exceptions import VisitError, GrammarError

###{standalone
from inspect import getmembers, getmro

@@ -28,16 +29,21 @@ class Transformer:
        except AttributeError:
            return self.__default__(tree.data, children, tree.meta)
        else:
-            if getattr(f, 'meta', False):
-                return f(children, tree.meta)
-            elif getattr(f, 'inline', False):
-                return f(*children)
-            elif getattr(f, 'whole_tree', False):
-                if new_children is not None:
-                    raise NotImplementedError("Doesn't work with the base Transformer class")
-                return f(tree)
-            else:
-                return f(children)
+            try:
+                if getattr(f, 'meta', False):
+                    return f(children, tree.meta)
+                elif getattr(f, 'inline', False):
+                    return f(*children)
+                elif getattr(f, 'whole_tree', False):
+                    if new_children is not None:
+                        raise NotImplementedError("Doesn't work with the base Transformer class")
+                    return f(tree)
+                else:
+                    return f(children)
+            except GrammarError:
+                raise
+            except Exception as e:
+                raise VisitError('Error trying to process rule "%s":\n\n%s' % (tree.data, e))

    def _transform_children(self, children):
        for c in children:
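
In effect, a transformer callback that throws now surfaces as a `VisitError` naming the offending rule. A sketch under the behavior added here (the grammar and transformer are illustrative):

```python
from lark import Lark, Transformer
from lark.exceptions import VisitError

class ToInt(Transformer):
    def value(self, children):
        return int(children[0])  # raises ValueError on non-numeric input

tree = Lark("""
start: value
value: WORD
%import common.WORD
""").parse("hello")

try:
    ToInt().transform(tree)
except VisitError as e:
    print(e)  # Error trying to process rule "value": ...
```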


tests/grammars/test.lark (+1 -1)

@@ -1,3 +1,3 @@
%import common.NUMBER
%import common.WORD
%import common.WS

tests/test_parser.py (+2 -2)

@@ -1311,8 +1311,8 @@ def _make_parser_test(LEXER, PARSER):
self.assertEqual(p.parse("bb").children, [None, 'b', None, None, 'b', None])
self.assertEqual(p.parse("abbc").children, ['a', 'b', None, None, 'b', 'c'])
self.assertEqual(p.parse("babbcabcb").children,
[None, 'b', None,
'a', 'b', None,
[None, 'b', None,
'a', 'b', None,
None, 'b', 'c',
'a', 'b', 'c',
None, 'b', None])


tox.ini (+1 -1)

@@ -21,4 +21,4 @@ recreate=True
commands=
    git submodule sync -q
    git submodule update --init
-    python -m tests
+    python -m tests {posargs}
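
With `{posargs}` in place, any extra arguments passed on the tox command line after `--` are forwarded to the test run - e.g. `tox -- -v`, assuming the flag is one the test runner accepts.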
