<html>
<head>
<title>PLY (Python Lex-Yacc)</title>
</head>
<body bgcolor="#ffffff">

<h1>PLY (Python Lex-Yacc)</h1>

<b>
David M. Beazley <br>
dave@dabeaz.com<br>
</b>

<p>
<b>PLY Version: 3.0</b>
<p>

<!-- INDEX -->
<div class="sectiontoc">
<ul>
<li><a href="#ply_nn1">Preface and Requirements</a>
<li><a href="#ply_nn1">Introduction</a>
<li><a href="#ply_nn2">PLY Overview</a>
<li><a href="#ply_nn3">Lex</a>
<ul>
<li><a href="#ply_nn4">Lex Example</a>
<li><a href="#ply_nn5">The tokens list</a>
<li><a href="#ply_nn6">Specification of tokens</a>
<li><a href="#ply_nn7">Token values</a>
<li><a href="#ply_nn8">Discarded tokens</a>
<li><a href="#ply_nn9">Line numbers and positional information</a>
<li><a href="#ply_nn10">Ignored characters</a>
<li><a href="#ply_nn11">Literal characters</a>
<li><a href="#ply_nn12">Error handling</a>
<li><a href="#ply_nn13">Building and using the lexer</a>
<li><a href="#ply_nn14">The @TOKEN decorator</a>
<li><a href="#ply_nn15">Optimized mode</a>
<li><a href="#ply_nn16">Debugging</a>
<li><a href="#ply_nn17">Alternative specification of lexers</a>
<li><a href="#ply_nn18">Maintaining state</a>
<li><a href="#ply_nn19">Lexer cloning</a>
<li><a href="#ply_nn20">Internal lexer state</a>
<li><a href="#ply_nn21">Conditional lexing and start conditions</a>
<li><a href="#ply_nn21">Miscellaneous Issues</a>
</ul>
<li><a href="#ply_nn22">Parsing basics</a>
<li><a href="#ply_nn23">Yacc</a>
<ul>
<li><a href="#ply_nn24">An example</a>
<li><a href="#ply_nn25">Combining Grammar Rule Functions</a>
<li><a href="#ply_nn26">Character Literals</a>
<li><a href="#ply_nn26">Empty Productions</a>
<li><a href="#ply_nn28">Changing the starting symbol</a>
<li><a href="#ply_nn27">Dealing With Ambiguous Grammars</a>
<li><a href="#ply_nn28">The parser.out file</a>
<li><a href="#ply_nn29">Syntax Error Handling</a>
<ul>
<li><a href="#ply_nn30">Recovery and resynchronization with error rules</a>
<li><a href="#ply_nn31">Panic mode recovery</a>
<li><a href="#ply_nn35">Signaling an error from a production</a>
<li><a href="#ply_nn32">General comments on error handling</a>
</ul>
<li><a href="#ply_nn33">Line Number and Position Tracking</a>
<li><a href="#ply_nn34">AST Construction</a>
<li><a href="#ply_nn35">Embedded Actions</a>
<li><a href="#ply_nn36">Miscellaneous Yacc Notes</a>
</ul>
<li><a href="#ply_nn37">Multiple Parsers and Lexers</a>
<li><a href="#ply_nn38">Using Python's Optimized Mode</a>
<li><a href="#ply_nn44">Advanced Debugging</a>
<ul>
<li><a href="#ply_nn45">Debugging the lex() and yacc() commands</a>
<li><a href="#ply_nn46">Run-time Debugging</a>
</ul>
<li><a href="#ply_nn39">Where to go from here?</a>
</ul>
</div>
<!-- INDEX -->



<H2><a name="ply_nn1"></a>1. Preface and Requirements</H2>


<p>
This document provides an overview of lexing and parsing with PLY.
Given the intrinsic complexity of parsing, I would strongly advise
that you read (or at least skim) this entire document before jumping
into a big development project with PLY.
</p>

<p>
PLY-3.0 is compatible with both Python 2 and Python 3. Be aware that
Python 3 support is new and has not been extensively tested (although
all of the examples and unit tests pass under Python 3.0). If you are
using Python 2, you should try to use Python 2.4 or newer. Although PLY
works with versions as far back as Python 2.2, some of its optional features
require more modern library modules.
</p>

<H2><a name="ply_nn1"></a>2. Introduction</H2>


PLY is a pure-Python implementation of the popular compiler
construction tools lex and yacc. The main goal of PLY is to stay
fairly faithful to the way in which traditional lex/yacc tools work.
This includes supporting LALR(1) parsing as well as providing
extensive input validation, error reporting, and diagnostics. Thus,
if you've used yacc in another programming language, it should be
relatively straightforward to use PLY.

<p>
Early versions of PLY were developed to support an Introduction to
Compilers course I taught in 2001 at the University of Chicago. In this course,
students built a fully functional compiler for a simple Pascal-like
language. Their compiler, implemented entirely in Python, had to
include lexical analysis, parsing, type checking, type inference,
nested scoping, and code generation for the SPARC processor.
Approximately 30 different compiler implementations were completed in
this course. Most of PLY's interface and operation has been influenced by common
usability problems encountered by students. Since 2001, PLY has
continued to be improved as feedback has been received from users.
PLY-3.0 represents a major refactoring of the original implementation
with an eye towards future enhancements.

<p>
Since PLY was primarily developed as an instructional tool, you will
find it to be fairly picky about token and grammar rule
specification. In part, this
added formality is meant to catch common programming mistakes made by
novice users. However, advanced users will also find such features to
be useful when building complicated grammars for real programming

--- 8 unchanged lines hidden ---

parsing theory, syntax directed translation, and the use of compiler
construction tools such as lex and yacc in other programming
languages. If you are unfamiliar with these topics, you will probably
want to consult an introductory text such as "Compilers: Principles,
Techniques, and Tools", by Aho, Sethi, and Ullman. O'Reilly's "Lex
and Yacc" by John Levine may also be handy. In fact, the O'Reilly book can be
used as a reference for PLY as the concepts are virtually identical.

<H2><a name="ply_nn2"></a>3. PLY Overview</H2>


PLY consists of two separate modules; <tt>lex.py</tt> and
<tt>yacc.py</tt>, both of which are found in a Python package
called <tt>ply</tt>. The <tt>lex.py</tt> module is used to break input text into a
collection of tokens specified by a collection of regular expression
rules. <tt>yacc.py</tt> is used to recognize language syntax that has
been specified in the form of a context free grammar. <tt>yacc.py</tt> uses LR parsing and generates its parsing tables

--- 26 unchanged lines hidden ---

file, the specifications given to PLY <em>are</em> valid Python
programs. This means that there are no extra source files nor is
there a special compiler construction step (e.g., running yacc to
generate Python code for the compiler). Since the generation of the
parsing tables is relatively expensive, PLY caches the results and
saves them to a file. If no changes are detected in the input source,
the tables are read from the cache. Otherwise, they are regenerated.

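<p>
Both modules live in the <tt>ply</tt> package and are normally brought in
with ordinary imports, as in this short sketch (the same import forms used
in the examples later in this document):

<blockquote>
<pre>
import ply.lex as lex      # the lexer module
import ply.yacc as yacc    # the parser module
</pre>
</blockquote>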
<H2><a name="ply_nn3"></a>4. Lex</H2>


<tt>lex.py</tt> is used to tokenize an input string. For example, suppose
you're writing a programming language and a user supplied the following input string:

<blockquote>
<pre>
x = 3 + 42 * (s - t)

--- 26 unchanged lines hidden ---

('LPAREN','('), ('ID','s'), ('MINUS','-'),
('ID','t'), ('RPAREN',')')
</pre>
</blockquote>

The identification of tokens is typically done by writing a series of regular expression
rules. The next section shows how this is done using <tt>lex.py</tt>.

<H3><a name="ply_nn4"></a>4.1 Lex Example</H3>


The following example shows how <tt>lex.py</tt> is used to write a simple tokenizer.

<blockquote>
<pre>
# ------------------------------------------------------------
# calclex.py

--- 20 unchanged lines hidden ---

t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_LPAREN  = r'\('
t_RPAREN  = r'\)'

# A regular expression rule with some action code
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

# A string containing ignored characters (spaces and tabs)
t_ignore = ' \t'

# Error handling rule
def t_error(t):
    print "Illegal character '%s'" % t.value[0]
    t.lexer.skip(1)

# Build the lexer
lexer = lex.lex()

</pre>
</blockquote>
To use the lexer, you first need to feed it some input text using
its <tt>input()</tt> method. After that, repeated calls
to <tt>token()</tt> produce tokens. The following code shows how this
works:

<blockquote>
<pre>

# Test it out
data = '''
3 + 4 * 10
  + -20 *2
'''

# Give the lexer some input
lexer.input(data)

# Tokenize
while True:
    tok = lexer.token()
    if not tok: break      # No more input
    print tok
</pre>
</blockquote>

When executed, the example will produce the following output:

<blockquote>

--- 7 unchanged lines hidden ---

LexToken(PLUS,'+',3,14)
LexToken(MINUS,'-',3,16)
LexToken(NUMBER,20,3,18)
LexToken(TIMES,'*',3,20)
LexToken(NUMBER,2,3,21)
</pre>
</blockquote>

Lexers also support the iteration protocol. So, you can write the above loop as follows:

<blockquote>
<pre>
for tok in lexer:
    print tok
</pre>
</blockquote>

The tokens returned by <tt>lexer.token()</tt> are instances
of <tt>LexToken</tt>. This object has
attributes <tt>tok.type</tt>, <tt>tok.value</tt>,
<tt>tok.lineno</tt>, and <tt>tok.lexpos</tt>. The following code shows an example of
accessing these attributes:

<blockquote>
<pre>
# Tokenize
while True:
    tok = lexer.token()
    if not tok: break      # No more input
    print tok.type, tok.value, tok.lineno, tok.lexpos
</pre>
</blockquote>

The <tt>tok.type</tt> and <tt>tok.value</tt> attributes contain the
type and value of the token itself.
<tt>tok.lineno</tt> and <tt>tok.lexpos</tt> contain information about
the location of the token. <tt>tok.lexpos</tt> is the index of the
token relative to the start of the input text.

<H3><a name="ply_nn5"></a>4.2 The tokens list</H3>


All lexers must provide a list <tt>tokens</tt> that defines all of the possible token
names that can be produced by the lexer. This list is always required
and is used to perform a variety of validation checks. The tokens list is also used by the
<tt>yacc.py</tt> module to identify terminals.

<p>

--- 8 unchanged lines hidden ---

    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)
</pre>
</blockquote>

<H3><a name="ply_nn6"></a>4.3 Specification of tokens</H3>


Each token is specified by writing a regular expression rule. Each of these rules is
defined by making declarations with a special prefix <tt>t_</tt> to indicate that it
defines a token. For simple tokens, the regular expression can
be specified as strings such as this (note: Python raw strings are used since they are the
most convenient way to write regular expression strings):

--- 7 unchanged lines hidden ---

names supplied in <tt>tokens</tt>. If some kind of action needs to be performed,
a token rule can be specified as a function. For example, this rule matches numbers and
converts the string into a Python integer.

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t
</pre>
</blockquote>

When a function is used, the regular expression rule is specified in the function documentation string.
The function always takes a single argument which is an instance of
<tt>LexToken</tt>. This object has attributes of <tt>t.type</tt> which is the token type (as a string),
<tt>t.value</tt> which is the lexeme (the actual text matched), <tt>t.lineno</tt> which is the current line number, and <tt>t.lexpos</tt> which

--- 14 unchanged lines hidden ---

</ol>
<p>
Without this ordering, it can be difficult to correctly match certain types of tokens. For example, if you
wanted to have separate tokens for "=" and "==", you need to make sure that "==" is checked first. By sorting regular
expressions in order of decreasing length, this problem is solved for rules defined as strings. For functions,
the order can be explicitly controlled since rules appearing first are checked first.

<p>
To handle reserved words, you should write a single rule to match an
identifier and do a special name lookup in a function like this:

<blockquote>
<pre>
reserved = {
   'if' : 'IF',
   'then' : 'THEN',
   'else' : 'ELSE',
   'while' : 'WHILE',
   ...
}

tokens = ['LPAREN','RPAREN',...,'ID'] + list(reserved.values())

def t_ID(t):
    r'[a-zA-Z_][a-zA-Z_0-9]*'
    t.type = reserved.get(t.value,'ID')    # Check for reserved words
    return t
</pre>
</blockquote>

This approach greatly reduces the number of regular expression rules and is likely to make things a little faster.

--- 6 unchanged lines hidden ---

t_FOR   = r'for'
t_PRINT = r'print'
</pre>
</blockquote>

those rules will be triggered for identifiers that include those words as a prefix such as "forget" or "printed". This is probably not
what you want.

<H3><a name="ply_nn7"></a>4.4 Token values</H3>


When tokens are returned by lex, they have a value that is stored in the <tt>value</tt> attribute. Normally, the value is the text
that was matched. However, the value can be assigned to any Python object. For instance, when lexing identifiers, you may
want to return both the identifier name and information from some sort of symbol table. To do this, you might write a rule like this:

<blockquote>
<pre>
def t_ID(t):
    ...
    # Look up symbol table information and return a tuple
    t.value = (t.value, symbol_lookup(t.value))
    ...
    return t
</pre>
</blockquote>

It is important to note that storing data in other attribute names is <em>not</em> recommended. The <tt>yacc.py</tt> module only exposes the
contents of the <tt>value</tt> attribute. Thus, accessing other attributes may be unnecessarily awkward. If you
need to store multiple values on a token, assign a tuple, dictionary, or instance to <tt>value</tt>.

<H3><a name="ply_nn8"></a>4.5 Discarded tokens</H3>


To discard a token, such as a comment, simply define a token rule that returns no value. For example:

<blockquote>
<pre>
def t_COMMENT(t):
    r'\#.*'

--- 9 unchanged lines hidden ---

t_ignore_COMMENT = r'\#.*'
</pre>
</blockquote>

Be advised that if you are ignoring many different kinds of text, you may still want to use functions since these provide more precise
control over the order in which regular expressions are matched (i.e., functions are matched in order of specification whereas strings are
sorted by regular expression length).

<H3><a name="ply_nn9"></a>4.6 Line numbers and positional information</H3>


<p>By default, <tt>lex.py</tt> knows nothing about line numbers. This is because <tt>lex.py</tt> doesn't know anything
about what constitutes a "line" of input (e.g., the newline character or even if the input is textual data).
To update this information, you need to write a special rule. In the example, the <tt>t_newline()</tt> rule shows how to do this.

<blockquote>
<pre>

--- 12 unchanged lines hidden ---

column information as a separate step. For instance, just count backwards until you reach a newline.

<blockquote>
<pre>
# Compute column.
#     input is the input text string
#     token is a token instance
def find_column(input,token):
    last_cr = input.rfind('\n',0,token.lexpos)
    if last_cr < 0:
        last_cr = 0
    column = (token.lexpos - last_cr) + 1
    return column
</pre>
</blockquote>

Since column information is often only useful in the context of error handling, calculating the column
position can be performed when needed as opposed to doing it for each token.

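<p>
For example, here is a small sketch that reports a column number from the error
handling rule. It uses the <tt>find_column()</tt> function above together with the
lexer's <tt>lexdata</tt> attribute, which holds the stored input string:

<blockquote>
<pre>
def t_error(t):
    # t.lexer.lexdata is the full input string; t carries the lexpos attribute
    column = find_column(t.lexer.lexdata, t)
    print "Illegal character '%s' at column %d" % (t.value[0], column)
    t.lexer.skip(1)
</pre>
</blockquote>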
<H3><a name="ply_nn10"></a>4.7 Ignored characters</H3>


<p>
The special <tt>t_ignore</tt> rule is reserved by <tt>lex.py</tt> for characters
that should be completely ignored in the input stream.
Usually this is used to skip over whitespace and other non-essential characters.
Although it is possible to define a regular expression rule for whitespace in a manner
similar to <tt>t_newline()</tt>, the use of <tt>t_ignore</tt> provides substantially better
lexing performance because it is handled as a special case and is checked in a much
more efficient manner than the normal regular expression rules.

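<p>
For example, the calclex.py example above skips spaces and tabs like this (repeated
from that example; note that <tt>t_ignore</tt> is a plain string of individual characters
to skip, not a regular expression):

<blockquote>
<pre>
# A string containing ignored characters (spaces and tabs)
t_ignore = ' \t'
</pre>
</blockquote>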
<H3><a name="ply_nn11"></a>4.8 Literal characters</H3>


<p>
Literal characters can be specified by defining a variable <tt>literals</tt> in your lexing module. For example:

<blockquote>
<pre>
literals = [ '+','-','*','/' ]

--- 9 unchanged lines hidden ---

</blockquote>

A literal character is simply a single character that is returned "as is" when encountered by the lexer. Literals are checked
after all of the defined regular expression rules. Thus, if a rule starts with one of the literal characters, it will always
take precedence.
<p>
When a literal token is returned, both its <tt>type</tt> and <tt>value</tt> attributes are set to the character itself. For example, <tt>'+'</tt>.

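<p>
For instance, with the <tt>literals</tt> declaration above and the <tt>NUMBER</tt> rule from
earlier, tokenizing a string such as <tt>"3+4"</tt> would produce output along these lines
(a sketch; the exact formatting may vary):

<blockquote>
<pre>
lexer.input("3+4")
for tok in lexer:
    print tok

# LexToken(NUMBER,3,1,0)
# LexToken(+,'+',1,1)
# LexToken(NUMBER,4,1,2)
</pre>
</blockquote>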
<H3><a name="ply_nn12"></a>4.9 Error handling</H3>


<p>
Finally, the <tt>t_error()</tt>
function is used to handle lexing errors that occur when illegal
characters are detected. In this case, the <tt>t.value</tt> attribute contains the
rest of the input string that has not been tokenized. In the example, the error function
was defined as follows:

--- 4 unchanged lines hidden ---

def t_error(t):
    print "Illegal character '%s'" % t.value[0]
    t.lexer.skip(1)
</pre>
</blockquote>

In this case, we simply print the offending character and skip ahead one character by calling <tt>t.lexer.skip(1)</tt>.

<H3><a name="ply_nn13"></a>4.10 Building and using the lexer</H3>


<p>
To build the lexer, the function <tt>lex.lex()</tt> is used. This function
uses Python reflection (or introspection) to read the regular expression rules
out of the calling context and build the lexer. Once the lexer has been built, two methods can
be used to control the lexer.

<ul>
<li><tt>lexer.input(data)</tt>. Reset the lexer and store a new input string.
<li><tt>lexer.token()</tt>. Return the next token. Returns a special <tt>LexToken</tt> instance on success or
None if the end of the input text has been reached.
</ul>

The preferred way to use PLY is to invoke the above methods directly on the lexer object returned by the
<tt>lex()</tt> function. The legacy interface to PLY involves module-level functions <tt>lex.input()</tt> and <tt>lex.token()</tt>.
For example:

<blockquote>
<pre>
lex.lex()
lex.input(sometext)
while 1:
    tok = lex.token()
    if not tok: break
    print tok
</pre>
</blockquote>

<p>
In this example, the module-level functions <tt>lex.input()</tt> and <tt>lex.token()</tt> are bound to the <tt>input()</tt>
and <tt>token()</tt> methods of the last lexer created by the lex module. This interface may go away at some point so
it's probably best not to use it.

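<p>
By contrast, the preferred object-based style looks like this (a sketch using the
same <tt>input()</tt> and <tt>token()</tt> methods shown earlier):

<blockquote>
<pre>
lexer = lex.lex()
lexer.input(sometext)
while True:
    tok = lexer.token()
    if not tok: break
    print tok
</pre>
</blockquote>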
<H3><a name="ply_nn14"></a>4.11 The @TOKEN decorator</H3>


In some applications, you may want to build tokens from a series of
more complex regular expression rules. For example:

<blockquote>
<pre>
digit            = r'([0-9])'
nondigit         = r'([_A-Za-z])'
identifier       = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'

--- 28 unchanged lines hidden ---

t_ID.__doc__ = identifier
</pre>
</blockquote>

<b>NOTE:</b> Use of <tt>@TOKEN</tt> requires Python-2.4 or newer. If you're concerned about backwards compatibility with older
versions of Python, use the alternative approach of setting the docstring directly.

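<p>
As a quick reference, a minimal sketch of the decorator form itself (it assumes the
<tt>identifier</tt> pattern defined above and imports <tt>TOKEN</tt> from <tt>ply.lex</tt>):

<blockquote>
<pre>
from ply.lex import TOKEN

@TOKEN(identifier)
def t_ID(t):
    # The identifier string above supplies this rule's regular expression
    return t
</pre>
</blockquote>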
<H3><a name="ply_nn15"></a>4.12 Optimized mode</H3>


For improved performance, it may be desirable to use Python's
optimized mode (e.g., running Python with the <tt>-O</tt>
option). However, doing so causes Python to ignore documentation
strings. This presents special problems for <tt>lex.py</tt>. To
handle this case, you can create your lexer using
the <tt>optimize</tt> option as follows:

--- 20 unchanged lines hidden ---

<pre>
lexer = lex.lex(optimize=1,lextab="footab")
</pre>
</blockquote>

When running in optimized mode, it is important to note that lex disables most error checking. Thus, this is really only recommended
if you're sure everything is working correctly and you're ready to start releasing production code.

<H3><a name="ply_nn16"></a>4.13 Debugging</H3>


For the purpose of debugging, you can run <tt>lex()</tt> in a debugging mode as follows:

<blockquote>
<pre>
lexer = lex.lex(debug=1)
</pre>
</blockquote>

<p>
This will produce various sorts of debugging information including all of the added rules,
the master regular expressions used by the lexer, and tokens generated during lexing.
</p>

<p>
In addition, <tt>lex.py</tt> comes with a simple main function which
will either tokenize input read from standard input or from a file specified
on the command line. To use it, simply put this in your lexer:
</p>

<blockquote>
<pre>
if __name__ == '__main__':
     lex.runmain()
</pre>
</blockquote>

Please refer to the "Debugging" section near the end for some more advanced details
of debugging.

<H3><a name="ply_nn17"></a>4.14 Alternative specification of lexers</H3>


As shown in the example, lexers are specified all within one Python module. If you want to
put token rules in a different module from the one in which you invoke <tt>lex()</tt>, use the
<tt>module</tt> keyword argument.

<p>
For example, you might have a dedicated module that just contains
the token rules:

--- 19 unchanged lines hidden ---

t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_LPAREN  = r'\('
t_RPAREN  = r'\)'

# A regular expression rule with some action code
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

# A string containing ignored characters (spaces and tabs)

--- 20 unchanged lines hidden ---

>>> lexer.token()
LexToken(NUMBER,4,1,4)
>>> lexer.token()
None
>>>
</pre>
</blockquote>

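<p>
A lexer built from such a module looks something like this (a sketch; the module
name <tt>tokrules</tt> is a hypothetical file name for the rules above):

<blockquote>
<pre>
import ply.lex as lex
import tokrules              # hypothetical module holding the token rules

lexer = lex.lex(module=tokrules)
lexer.input("3 + 4")
</pre>
</blockquote>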
The <tt>module</tt> option can also be used to define lexers from instances of a class. For example:

<blockquote>
<pre>
import ply.lex as lex

class MyLexer:
    # List of token names.   This is always required
    tokens = (

--- 13 unchanged lines hidden ---

    t_DIVIDE  = r'/'
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'

    # A regular expression rule with some action code
    # Note addition of self parameter since we're in a class
    def t_NUMBER(self,t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Define a rule so we can track line numbers
    def t_newline(self,t):
        r'\n+'
        t.lexer.lineno += len(t.value)

    # A string containing ignored characters (spaces and tabs)
    t_ignore = ' \t'

    # Error handling rule
    def t_error(self,t):
        print "Illegal character '%s'" % t.value[0]
        t.lexer.skip(1)

    <b># Build the lexer
    def build(self,**kwargs):
        self.lexer = lex.lex(module=self, **kwargs)</b>

    # Test its output
    def test(self,data):
        self.lexer.input(data)
        while True:
            tok = self.lexer.token()
            if not tok: break
            print tok

# Build the lexer and try it out
m = MyLexer()
m.build()           # Build the lexer
m.test("3 + 4")     # Test it
</pre>
</blockquote>

When building a lexer from a class, <em>you should construct the lexer from
an instance of the class</em>, not the class object itself. This is because
PLY only works properly if the lexer actions are defined by bound methods.

<p>
When using the <tt>module</tt> option to <tt>lex()</tt>, PLY collects symbols
from the underlying object using the <tt>dir()</tt> function. There is no
direct access to the <tt>__dict__</tt> attribute of the object supplied as a
module value.

<P>
Finally, if you want to keep things nicely encapsulated, but don't want to use a
full-fledged class definition, lexers can be defined using closures. For example:

<blockquote>
<pre>
import ply.lex as lex

# List of token names.   This is always required
tokens = (
    'NUMBER',
    'PLUS',
    'MINUS',
    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)

def MyLexer():
    # Regular expression rules for simple tokens
    t_PLUS    = r'\+'
    t_MINUS   = r'-'
    t_TIMES   = r'\*'
    t_DIVIDE  = r'/'
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'

    # A regular expression rule with some action code
    def t_NUMBER(t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Define a rule so we can track line numbers
    def t_newline(t):
        r'\n+'
        t.lexer.lineno += len(t.value)

    # A string containing ignored characters (spaces and tabs)
    t_ignore = ' \t'

    # Error handling rule
    def t_error(t):
        print "Illegal character '%s'" % t.value[0]
        t.lexer.skip(1)

    # Build the lexer from my environment and return it
    return lex.lex()
</pre>
</blockquote>


<H3><a name="ply_nn18"></a>4.15 Maintaining state</H3>


In your lexer, you may want to maintain a variety of state
information. This might include mode settings, symbol tables, and
other details. As an example, suppose that you wanted to keep
track of how many NUMBER tokens had been encountered.

<p>
One way to do this is to keep a set of global variables in the module
where you created the lexer. For example:

<blockquote>
<pre>
num_count = 0
def t_NUMBER(t):
    r'\d+'
    global num_count
    num_count += 1
    t.value = int(t.value)
    return t
</pre>
</blockquote>

If you don't like the use of a global variable, another place to store
information is inside the Lexer object created by <tt>lex()</tt>.
To do this, you can use the <tt>lexer</tt> attribute of tokens passed to
the various rules. For example:

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    t.lexer.num_count += 1     # Note use of lexer attribute
    t.value = int(t.value)
    return t

lexer = lex.lex()
lexer.num_count = 0            # Set the initial count
</pre>
</blockquote>
    1027
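<p>
As a quick usage sketch (the input string here is arbitrary), the count can be read back
off the lexer after tokenizing:

<blockquote>
<pre>
lexer = lex.lex()
lexer.num_count = 0
lexer.input("3 + 4 * 10")
while True:
    tok = lexer.token()
    if not tok: break
print lexer.num_count      # Prints 3 (three NUMBER tokens were seen)
</pre>
</blockquote>
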
This latter approach has the advantage of being simple and working
correctly in applications where multiple instantiations of a given
lexer exist in the same application.  However, this might also feel
like a gross violation of encapsulation to OO purists.
Just to put your mind at some ease, all
internal attributes of the lexer (with the exception of <tt>lineno</tt>) have names that are prefixed
by <tt>lex</tt> (e.g., <tt>lexdata</tt>,<tt>lexpos</tt>, etc.).  Thus,
it is perfectly safe to store attributes in the lexer that
don't have names starting with that prefix or a name that conflicts with one of the
predefined methods (e.g., <tt>input()</tt>, <tt>token()</tt>, etc.).

<p>
If you don't like assigning values on the lexer object, you can define your lexer as a class as
shown in the previous section:

<blockquote>
<pre>
class MyLexer:
    ...
    def t_NUMBER(self,t):
        r'\d+'
        self.num_count += 1
        t.value = int(t.value)
        return t

    def build(self, **kwargs):
        self.lexer = lex.lex(object=self,**kwargs)

    def __init__(self):
        self.num_count = 0
</pre>
</blockquote>

The class approach may be the easiest to manage if your application is
going to be creating multiple instances of the same lexer and you need
to manage a lot of state.

<p>
State can also be managed through closures. For example, in Python 3:

<blockquote>
<pre>
def MyLexer():
    num_count = 0
    ...
    def t_NUMBER(t):
        r'\d+'
        nonlocal num_count
        num_count += 1
        t.value = int(t.value)
        return t
    ...
</pre>
</blockquote>

<H3><a name="ply_nn19"></a>4.16 Lexer cloning</H3>

<p>
If necessary, a lexer object can be duplicated by invoking its <tt>clone()</tt> method.  For example:

<blockquote>
<pre>
lexer = lex.lex()
...
newlexer = lexer.clone()
</pre>
</blockquote>

When a lexer is cloned, the copy is exactly identical to the original lexer
including any input text and internal state. However, the clone allows a
different set of input text to be supplied which may be processed separately.
This may be useful in situations when you are writing a parser/compiler that
involves recursive or reentrant processing.  For instance, if you
needed to scan ahead in the input for some reason, you could create a
clone and use it to look ahead.  Or, if you were implementing some kind of preprocessor,
cloned lexers could be used to handle different input files.
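<p>
As a rough sketch of the look-ahead idea (assuming the token rules from the earlier
calculator example), a clone can consume tokens without disturbing the original lexer:

<blockquote>
<pre>
lexer = lex.lex()
lexer.input("3 + 4 * 5")

# Scan ahead with a clone; the original lexer's position is unaffected
peek = lexer.clone()
upcoming = [peek.token(), peek.token()]    # NUMBER, PLUS

tok = lexer.token()                        # Still returns the first token (NUMBER)
</pre>
</blockquote>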

<p>
Creating a clone is different than calling <tt>lex.lex()</tt> in that
PLY doesn't regenerate any of the internal tables or regular expressions.  So,
cloning is significantly faster than building a new lexer from scratch.

<p>
Special considerations need to be made when cloning lexers that also
maintain their own internal state using classes or closures. Namely,
you need to be aware that the newly created lexers will share all of
this state with the original lexer.  For example, if you defined a
lexer as a class and did this:

<blockquote>
<pre>
m = MyLexer()
a = lex.lex(object=m)      # Create a lexer

b = a.clone()              # Clone the lexer
</pre>
</blockquote>

Then both <tt>a</tt> and <tt>b</tt> are going to be bound to the same
object <tt>m</tt> and any changes to <tt>m</tt> will be reflected in both lexers.  It's
important to emphasize that <tt>clone()</tt> is only meant to create a new lexer
that reuses the regular expressions and environment of another lexer.  If you
need to make a totally new copy of a lexer, then call <tt>lex()</tt> again.

<H3><a name="ply_nn20"></a>4.17 Internal lexer state</H3>

A Lexer object <tt>lexer</tt> has a number of internal attributes that may be useful in certain
situations.

<p>
<tt>lexer.lexpos</tt>
<blockquote>
This attribute is an integer that contains the current position within the input text.  If you modify
the value, it will change the result of the next call to <tt>token()</tt>.  Within token rule functions, this points
to the first character <em>after</em> the matched text.  If the value is modified within a rule, the next returned token will be
matched at the new position.
</blockquote>
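<p>
For instance, here is a rough sketch of a rule that advances <tt>lexer.lexpos</tt> to skip
over text (the <tt>SKIPTO</tt> token and its behavior are hypothetical):

<blockquote>
<pre>
def t_SKIPTO(t):
    r'@'
    # Hypothetical: discard everything up to and including the next semicolon
    end = t.lexer.lexdata.find(';', t.lexer.lexpos)
    if end &gt;= 0:
        t.lexer.lexpos = end + 1
</pre>
</blockquote>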

<p>
<tt>lexer.lineno</tt>
<blockquote>
The current value of the line number attribute stored in the lexer.  PLY only specifies that the attribute
exists---it never sets, updates, or performs any processing with it.  If you want to track line numbers,
you will need to add code yourself (see the section on line numbers and positional information).
</blockquote>

<p>
<tt>lexer.lexdata</tt>
<blockquote>
The current input text stored in the lexer.  This is the string passed with the <tt>input()</tt> method. It
would probably be a bad idea to modify this unless you really know what you're doing.
</blockquote>

<P>
<tt>lexer.lexmatch</tt>
<blockquote>
This is the raw <tt>Match</tt> object returned by the Python <tt>re.match()</tt> function (used internally by PLY) for the
current token.  If you have written a regular expression that contains named groups, you can use this to retrieve those values.
Note:  This attribute is only updated when tokens are defined and processed by functions.
</blockquote>

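<p>
For example, here is a rough sketch of a rule that pulls named groups out of
<tt>lexer.lexmatch</tt> (the <tt>ASSIGN</tt> token and its pattern are hypothetical):

<blockquote>
<pre>
def t_ASSIGN(t):
    r'(?P&lt;name&gt;[a-zA-Z_]\w*)\s*=\s*(?P&lt;value&gt;\d+)'
    # Retrieve the named groups from the underlying match object
    m = t.lexer.lexmatch
    t.value = (m.group('name'), int(m.group('value')))
    return t
</pre>
</blockquote>
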
<H3><a name="ply_nn21"></a>4.18 Conditional lexing and start conditions</H3>

In advanced parsing applications, it may be useful to have different
lexing states. For instance, you may want the occurrence of a certain
token or syntactic construct to trigger a different kind of lexing.
PLY supports a feature that allows the underlying lexer to be put into
a series of different states.  Each state can have its own tokens,
lexing rules, and so forth.  The implementation is based largely on

--- 182 unchanged lines hidden ---

</blockquote>

In this example, the occurrence of the first '{' causes the lexer to record the starting position and enter a new state <tt>'ccode'</tt>.  A collection of rules then match
various parts of the input that follow (comments, strings, etc.).  All of these rules merely discard the token (by not returning a value).
However, if the closing right brace is encountered, the rule <tt>t_ccode_rbrace</tt> collects all of the code (using the earlier recorded starting
position), stores it, and returns a token 'CCODE' containing all of that text.  When returning the token, the lexing state is restored back to its
initial state.

<H3><a name="ply_nn21"></a>4.19 Miscellaneous Issues</H3>

<P>
<li>The lexer requires input to be supplied as a single input string.  Since most machines have more than enough memory, this
rarely presents a performance concern.  However, it means that the lexer currently can't be used with streaming data
such as open files or sockets.  This limitation is primarily a side-effect of using the <tt>re</tt> module.

--- 23 unchanged lines hidden ---

it only needs to conform to the following requirements:

<ul>
<li>It must provide a <tt>token()</tt> method that returns the next token or <tt>None</tt> if no more
tokens are available.
<li>The <tt>token()</tt> method must return an object <tt>tok</tt> that has <tt>type</tt> and <tt>value</tt> attributes.
</ul>
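<p>
To make that requirement concrete, here is a rough sketch of a minimal hand-rolled token
source (the <tt>Token</tt> and <tt>TokenStream</tt> names are hypothetical, not part of PLY):

<blockquote>
<pre>
class Token(object):
    def __init__(self, type, value):
        self.type  = type       # Token name such as 'NUMBER'
        self.value = value      # Associated value

class TokenStream(object):
    def __init__(self, toks):
        self.toks = list(toks)
    def token(self):
        # Return the next token, or None when no more are available
        if self.toks:
            return self.toks.pop(0)
        return None
</pre>
</blockquote>
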
<H2><a name="ply_nn22"></a>5. Parsing basics</H2>

<tt>yacc.py</tt> is used to parse language syntax.  Before showing an
example, there are a few important bits of background that must be
mentioned.  First, <em>syntax</em> is usually specified in terms of a BNF grammar.
For example, if you wanted to parse
simple arithmetic expressions, you might first write an unambiguous
grammar specification like this:

--- 9 unchanged lines hidden ---

       | factor

factor : NUMBER
       | ( expression )
</pre>
</blockquote>

In the grammar, symbols such as <tt>NUMBER</tt>, <tt>+</tt>, <tt>-</tt>, <tt>*</tt>, and <tt>/</tt> are known
as <em>terminals</em> and correspond to raw input tokens.  Identifiers such as <tt>term</tt> and <tt>factor</tt> refer to
grammar rules comprised of a collection of terminals and other rules.  These identifiers are known as <em>non-terminals</em>.
<P>

The semantic behavior of a language is often specified using a
technique known as syntax directed translation.  In syntax directed
translation, attributes are attached to each symbol in a given grammar
rule along with an action.  Whenever a particular grammar rule is
recognized, the action describes what to do.  For example, given the
expression grammar above, you might write the specification for a
simple calculator like this:

--- 9 unchanged lines hidden ---

       |  term1 / factor       term0.val = term1.val / factor.val
       |  factor               term0.val = factor.val

factor : NUMBER                factor.val = int(NUMBER.lexval)
       |  ( expression )       factor.val = expression.val
</pre>
</blockquote>

A good way to think about syntax directed translation is to
view each symbol in the grammar as a kind of object. Associated
with each symbol is a value representing its "state" (for example, the
<tt>val</tt> attribute above).  Semantic
actions are then expressed as a collection of functions or methods
that operate on the symbols and associated values.

<p>
Yacc uses a parsing technique known as LR-parsing or shift-reduce parsing.  LR parsing is a
bottom up technique that tries to recognize the right-hand-side of various grammar rules.
Whenever a valid right-hand-side is found in the input, the appropriate action code is triggered and the
grammar symbols are replaced by the grammar symbol on the left-hand-side.

<p>
LR parsing is commonly implemented by shifting grammar symbols onto a
stack and looking at the stack and the next input token for patterns that
match one of the grammar rules.
The details of the algorithm can be found in a compiler textbook, but the
following example illustrates the steps that are performed if you
wanted to parse the expression
<tt>3 + 5 * (10 - 20)</tt> using the grammar defined above.  In the example,
the special symbol <tt>$</tt> represents the end of input.

<blockquote>
<pre>
Step Symbol Stack                   Input Tokens             Action
---- -----------------------------  ----------------------   -------------------------------
1                                   3 + 5 * ( 10 - 20 )$     Shift 3
2    3                              + 5 * ( 10 - 20 )$       Reduce factor : NUMBER
3    factor                         + 5 * ( 10 - 20 )$       Reduce term : factor
4    term                           + 5 * ( 10 - 20 )$       Reduce expr : term
5    expr                           + 5 * ( 10 - 20 )$       Shift +
6    expr +                         5 * ( 10 - 20 )$         Shift 5
7    expr + 5                       * ( 10 - 20 )$           Reduce factor : NUMBER
8    expr + factor                  * ( 10 - 20 )$           Reduce term : factor
9    expr + term                    * ( 10 - 20 )$           Shift *
10   expr + term *                  ( 10 - 20 )$             Shift (
11   expr + term * (                10 - 20 )$               Shift 10
12   expr + term * ( 10             - 20 )$                  Reduce factor : NUMBER
13   expr + term * ( factor         - 20 )$                  Reduce term : factor
14   expr + term * ( term           - 20 )$                  Reduce expr : term
15   expr + term * ( expr           - 20 )$                  Shift -
16   expr + term * ( expr -         20 )$                    Shift 20
17   expr + term * ( expr - 20      )$                       Reduce factor : NUMBER
18   expr + term * ( expr - factor  )$                       Reduce term : factor
19   expr + term * ( expr - term    )$                       Reduce expr : expr - term
20   expr + term * ( expr           )$                       Shift )
21   expr + term * ( expr )         $                        Reduce factor : (expr)
22   expr + term * factor           $                        Reduce term : term * factor
23   expr + term                    $                        Reduce expr : expr + term
24   expr                           $                        Reduce expr
25                                  $                        Success!
</pre>
</blockquote>

When parsing the expression, an underlying state machine and the
current input token determine what happens next.  If the next token
looks like part of a valid grammar rule (based on other items on the
stack), it is generally shifted onto the stack.  If the top of the
stack contains a valid right-hand-side of a grammar rule, it is
usually "reduced" and the symbols replaced with the symbol on the
left-hand-side.  When this reduction occurs, the appropriate action is
triggered (if defined).  If the input token can't be shifted and the
top of stack doesn't match any grammar rules, a syntax error has
occurred and the parser must take some kind of recovery step (or bail
out).  A parse is only successful if the parser reaches a state where
the symbol stack is empty and there are no more input tokens.

<p>
It is important to note that the underlying implementation is built
around a large finite-state machine that is encoded in a collection of
tables. The construction of these tables is non-trivial and
beyond the scope of this discussion.  However, subtle details of this
process explain why, in the example above, the parser chooses to shift
a token onto the stack in step 9 rather than reducing the
rule <tt>expr : expr + term</tt>.

<H2><a name="ply_nn23"></a>6. Yacc</H2>

The <tt>ply.yacc</tt> module implements the parsing component of PLY.
The name "yacc" stands for "Yet Another Compiler Compiler" and is
borrowed from the Unix tool of the same name.

<H3><a name="ply_nn24"></a>6.1 An example</H3>

Suppose you wanted to make a grammar for simple arithmetic expressions as previously described.   Here is
how you would do it with <tt>yacc.py</tt>:

<blockquote>
<pre>
# Yacc example

--- 35 unchanged lines hidden ---

    'factor : LPAREN expression RPAREN'
    p[0] = p[2]

# Error rule for syntax errors
def p_error(p):
    print "Syntax error in input!"

# Build the parser
parser = yacc.yacc()

while True:
    try:
        s = raw_input('calc > ')
    except EOFError:
        break
    if not s: continue
    result = parser.parse(s)
    print result
</pre>
</blockquote>

In this example, each grammar rule is defined by a Python function
where the docstring to that function contains the appropriate
context-free grammar specification.  The statements that make up the
function body implement the semantic actions of the rule. Each function
accepts a single argument <tt>p</tt> that is a sequence containing the
values of each grammar symbol in the corresponding rule.  The values
of <tt>p[i]</tt> are mapped to grammar symbols as shown here:

<blockquote>
<pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    #   ^            ^        ^    ^
    #  p[0]         p[1]     p[2] p[3]

    p[0] = p[1] + p[3]
</pre>
</blockquote>

<p>
For tokens, the "value" of the corresponding <tt>p[i]</tt> is the
<em>same</em> as the <tt>p.value</tt> attribute assigned in the lexer
module.  For non-terminals, the value is determined by whatever is
placed in <tt>p[0]</tt> when rules are reduced.  This value can be
anything at all.  However, it is probably most common for the value to be
a simple Python type, a tuple, or an instance.  In this example, we
are relying on the fact that the <tt>NUMBER</tt> token stores an
integer value in its value field.  All of the other rules simply
perform various types of integer operations and propagate the result.
</p>
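<p>
For instance, here is a rough sketch of the same rule building a tuple-based parse tree
instead of computing a numeric result:

<blockquote>
<pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    # Propagate a tree node rather than an integer
    p[0] = ('+', p[1], p[3])
</pre>
</blockquote>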

<p>
Note: The use of negative indices has a special meaning in
yacc---specifically <tt>p[-1]</tt> does not have the same value
as <tt>p[3]</tt> in this example.  Please see the section on "Embedded
Actions" for further details.
</p>

<p>
The first rule defined in the yacc specification determines the
starting grammar symbol (in this case, a rule for <tt>expression</tt>
appears first).  Whenever the starting rule is reduced by the parser
and no more input is available, parsing stops and the final value is
returned (this value will be whatever the top-most rule placed
in <tt>p[0]</tt>). Note: an alternative starting symbol can be
specified using the <tt>start</tt> keyword argument to
<tt>yacc()</tt>.

<p>The <tt>p_error(p)</tt> rule is defined to catch syntax errors.
See the error handling section below for more detail.

<p>
To build the parser, call the <tt>yacc.yacc()</tt> function.  This
function looks at the module and attempts to construct all of the LR
parsing tables for the grammar you have specified.  The first
time <tt>yacc.yacc()</tt> is invoked, you will get a message such as
this:

<blockquote>
<pre>
$ python calcparse.py
Generating LALR tables
calc >
</pre>
</blockquote>

Since table construction is relatively expensive (especially for large
grammars), the resulting parsing table is written to the current
directory in a file called <tt>parsetab.py</tt>.  In addition, a
debugging file called <tt>parser.out</tt> is created.  On subsequent
executions, <tt>yacc</tt> will reload the table from
<tt>parsetab.py</tt> unless it has detected a change in the underlying
grammar (in which case the tables and <tt>parsetab.py</tt> file are
regenerated).  Note: The names of parser output files can be changed
if necessary.  See the <a href="reference.html">PLY Reference</a> for details.

<p>
If any errors are detected in your grammar specification, <tt>yacc.py</tt> will produce
diagnostic messages and possibly raise an exception.  Some of the errors that can be detected include:

<ul>
<li>Duplicated function names (if more than one rule function have the same name in the grammar file).
<li>Shift/reduce and reduce/reduce conflicts generated by ambiguous grammars.
<li>Badly specified grammar rules.
<li>Infinite recursion (rules that can never terminate).
<li>Unused rules and tokens
<li>Undefined rules and tokens
</ul>

The next few sections discuss grammar specification in more detail.

<p>
The final part of the example shows how to actually run the parser
created by <tt>yacc()</tt>.  To run the parser, you simply have to call
<tt>parse()</tt> with a string of input text.  This will run all
of the grammar rules and return the result of the entire parse.  The
result is the value assigned to <tt>p[0]</tt> in the starting
grammar rule.

<H3><a name="ply_nn25"></a>6.2 Combining Grammar Rule Functions</H3>

When grammar rules are similar, they can be combined into a single function.
For example, consider the two rules in our earlier example:

<blockquote>
<pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    p[0] = p[1] + p[3]

--- 50 unchanged lines hidden ---

                  | MINUS expression'''
    if (len(p) == 4):
        p[0] = p[1] - p[3]
    elif (len(p) == 3):
        p[0] = -p[2]
</pre>
</blockquote>

If parsing performance is a concern, you should resist the urge to put
too much conditional processing into a single grammar rule as shown in
these examples.  When you add checks to see which grammar rule is
being handled, you are actually duplicating the work that the parser
has already performed (i.e., the parser already knows exactly what rule it
matched).  You can eliminate this overhead by using a
separate <tt>p_rule()</tt> function for each grammar rule.
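<p>
For instance, a rough sketch of the combined rule above rewritten with one function per
grammar rule:

<blockquote>
<pre>
def p_expression_binop_minus(p):
    'expression : expression MINUS expression'
    p[0] = p[1] - p[3]

def p_expression_uminus(p):
    'expression : MINUS expression'
    p[0] = -p[2]
</pre>
</blockquote>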

<H3><a name="ply_nn26"></a>6.3 Character Literals</H3>

If desired, a grammar may contain tokens defined as single character literals.   For example:

<blockquote>
<pre>
def p_binary_operators(p):
    '''expression : expression '+' term
                  | expression '-' term
       term       : term '*' factor

--- 17 unchanged lines hidden ---

# Literals.  Should be placed in module given to lex()
literals = ['+','-','*','/' ]
</pre>
</blockquote>

<b>Character literals are limited to a single character</b>.  Thus, it is not legal to specify literals such as <tt>'&lt;='</tt> or <tt>'=='</tt>.  For this, use
the normal lexing rules (e.g., define a rule such as <tt>t_EQ = r'=='</tt>).
<H3><a name="ply_nn26"></a>6.4 Empty Productions</H3>

<tt>yacc.py</tt> can handle empty productions by defining a rule like this:

<blockquote>
<pre>
def p_empty(p):
    'empty :'

--- 7 unchanged lines hidden ---

<pre>
def p_optitem(p):
    'optitem : item'
    '        | empty'
    ...
</pre>
</blockquote>

Note: You can write empty rules anywhere by simply specifying an empty
right hand side.  However, I personally find that writing an "empty"
rule and using "empty" to denote an empty production is easier to read
and more clearly states your intentions.

<H3><a name="ply_nn28"></a>6.5 Changing the starting symbol</H3>

Normally, the first rule found in a yacc specification defines the starting grammar rule (top level rule).  To change this, simply
supply a <tt>start</tt> specifier in your file.  For example:

<blockquote>
<pre>
start = 'foo'

def p_bar(p):
    'bar : A B'

# This is the starting rule due to the start specifier above
def p_foo(p):
    'foo : bar X'
...
</pre>
</blockquote>

The use of a <tt>start</tt> specifier may be useful during debugging
since you can use it to have yacc build a subset of a larger grammar.
For this purpose, it is also possible to specify a starting symbol as
an argument to <tt>yacc()</tt>. For example:

<blockquote>
<pre>
yacc.yacc(start='foo')
</pre>
</blockquote>

<H3><a name="ply_nn27"></a>6.6 Dealing With Ambiguous Grammars</H3>

The expression grammar given in the earlier example has been written
in a special format to eliminate ambiguity.  However, in many
situations, it is extremely difficult or awkward to write grammars in
this format.  A much more natural way to express the grammar is in a
more compact form like this:

<blockquote>
<pre>
expression : expression PLUS expression
           | expression MINUS expression
           | expression TIMES expression
           | expression DIVIDE expression
           | LPAREN expression RPAREN
           | NUMBER
</pre>
</blockquote>

Unfortunately, this grammar specification is ambiguous.  For example,
if you are parsing the string "3 * 4 + 5", there is no way to tell how
the operators are supposed to be grouped.  For example, does the
expression mean "(3 * 4) + 5" or is it "3 * (4+5)"?

<p>
When an ambiguous grammar is given to <tt>yacc.py</tt> it will print
messages about "shift/reduce conflicts" or "reduce/reduce conflicts".
A shift/reduce conflict is caused when the parser generator can't
decide whether or not to reduce a rule or shift a symbol on the
parsing stack.  For example, consider the string "3 * 4 + 5" and the
internal parsing stack:

<blockquote>
<pre>
Step Symbol Stack           Input Tokens            Action
---- ---------------------  ---------------------   -------------------------------
1    $                                3 * 4 + 5$    Shift 3
2    $ 3                                * 4 + 5$    Reduce : expression : NUMBER
3    $ expr                             * 4 + 5$    Shift *
4    $ expr *                             4 + 5$    Shift 4
5    $ expr * 4                             + 5$    Reduce: expression : NUMBER
6    $ expr * expr                          + 5$    SHIFT/REDUCE CONFLICT ????
</pre>
</blockquote>

In this case, when the parser reaches step 6, it has two options.  One
is to reduce the rule <tt>expr : expr * expr</tt> on the stack.  The
other option is to shift the token <tt>+</tt> on the stack.  Both
options are perfectly legal from the rules of the
context-free-grammar.

<p>
By default, all shift/reduce conflicts are resolved in favor of
shifting.  Therefore, in the above example, the parser will always
shift the <tt>+</tt> instead of reducing.  Although this strategy
works in many cases (for example, the case of
"if-then" versus "if-then-else"), it is not enough for arithmetic expressions.  In fact,
in the above example, the decision to shift <tt>+</tt> is completely
wrong---we should have reduced <tt>expr * expr</tt> since
multiplication has higher mathematical precedence than addition.

<p>
To resolve ambiguity, especially in expression
grammars, <tt>yacc.py</tt> allows individual tokens to be assigned a
precedence level and associativity.  This is done by adding a variable
<tt>precedence</tt> to the grammar file like this:

<blockquote>
<pre>
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)
</pre>
</blockquote>

This declaration specifies that <tt>PLUS</tt>/<tt>MINUS</tt> have the
same precedence level and are left-associative and that
<tt>TIMES</tt>/<tt>DIVIDE</tt> have the same precedence and are
left-associative.  Within the <tt>precedence</tt> declaration, tokens
are ordered from lowest to highest precedence. Thus, this declaration
specifies that <tt>TIMES</tt>/<tt>DIVIDE</tt> have higher precedence
than <tt>PLUS</tt>/<tt>MINUS</tt> (since they appear later in the
precedence specification).

<p>
The precedence specification works by associating a numerical
precedence level value and associativity direction to the listed
tokens.  For example, in the above example you get:

<blockquote>
<pre>
PLUS      : level = 1,  assoc = 'left'
MINUS     : level = 1,  assoc = 'left'
TIMES     : level = 2,  assoc = 'left'
DIVIDE    : level = 2,  assoc = 'left'
</pre>
</blockquote>
These values are then used to attach a numerical precedence value and
associativity direction to each grammar rule. <em>This is always
determined by looking at the precedence of the right-most terminal
symbol.</em>  For example:

<blockquote>
<pre>
expression : expression PLUS expression            # level = 1, left
           | expression MINUS expression           # level = 1, left
           | expression TIMES expression           # level = 2, left
           | expression DIVIDE expression          # level = 2, left
           | LPAREN expression RPAREN              # level = None (not specified)
           | NUMBER                                # level = None (not specified)
</pre>
</blockquote>

When shift/reduce conflicts are encountered, the parser generator resolves the conflict by
looking at the precedence rules and associativity specifiers.

<p>
<ol>
<li>If the current token has higher precedence than the rule on the stack, it is shifted.
<li>If the grammar rule on the stack has higher precedence, the rule is reduced.
<li>If the current token and the grammar rule have the same precedence, the
rule is reduced for left associativity, whereas the token is shifted for right associativity.
<li>If nothing is known about the precedence, shift/reduce conflicts are resolved in
favor of shifting (the default).
</ol>

For example, if "expression PLUS expression" has been parsed and the
next token is "TIMES", the action is going to be a shift because
"TIMES" has a higher precedence level than "PLUS". On the other hand,
if "expression TIMES expression" has been parsed and the next token is
"PLUS", the action is going to be reduce because "PLUS" has a lower
precedence than "TIMES."

<p>
When shift/reduce conflicts are resolved using the first three
techniques (with the help of precedence rules), <tt>yacc.py</tt> will
report no errors or conflicts in the grammar (although it will print
some information in the <tt>parser.out</tt> debugging file).

<p>
One problem with the precedence specifier technique is that it is
sometimes necessary to change the precedence of an operator in certain
contexts. For example, consider a unary-minus operator in "3 + 4 *
-5". Mathematically, the unary minus is normally given a very high
precedence--being evaluated before the multiply. However, in our
precedence specifier, MINUS has a lower precedence than TIMES. To
deal with this, precedence rules can be given for so-called "fictitious tokens"
like this:

<blockquote>
<pre>
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),            # Unary minus operator
)
</pre>
</blockquote>

--- 72 lines omitted ---

For example, if you wrote "a = 5", the parser can't figure out if this
is supposed to be reduced as <tt>assignment : ID EQUALS NUMBER</tt> or
whether it's supposed to reduce the 5 as an expression and then reduce
the rule <tt>assignment : ID EQUALS expression</tt>.

<p>
It should be noted that reduce/reduce conflicts are notoriously
difficult to spot simply by looking at the input grammar. When a
reduce/reduce conflict occurs, <tt>yacc()</tt> will try to help by
printing a warning message such as this:

<blockquote>
<pre>
WARNING: 1 reduce/reduce conflict
WARNING: reduce/reduce conflict in state 15 resolved using rule (assignment -> ID EQUALS NUMBER)
WARNING: rejected rule (expression -> NUMBER)
</pre>
</blockquote>

This message identifies the two rules that are in conflict. However,
it may not tell you how the parser arrived at such a state. To try
and figure it out, you'll probably have to look at your grammar and
the contents of the <tt>parser.out</tt> debugging file with an
appropriately high level of caffeination.

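For reference, a grammar fragment of roughly the following shape (the
function names here are hypothetical) produces exactly this kind of
conflict, because a bare <tt>NUMBER</tt> after <tt>ID EQUALS</tt> can
be reduced in two different ways:

<blockquote>
<pre>
def p_assignment_number(p):
    'assignment : ID EQUALS NUMBER'

def p_assignment_expr(p):
    'assignment : ID EQUALS expression'

def p_expression_number(p):
    'expression : NUMBER'
</pre>
</blockquote>
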
<H3><a name="ply_nn28"></a>6.7 The parser.out file</H3>

Tracking down shift/reduce and reduce/reduce conflicts is one of the finer pleasures of using an LR
parsing algorithm. To assist in debugging, <tt>yacc.py</tt> creates a debugging file called
'parser.out' when it generates the parsing table. The contents of this file look like the following:

<blockquote>
<pre>
Unused terminals:

--- 239 lines omitted ---

    PLUS            reduce using rule 6
    MINUS           reduce using rule 6
    TIMES           reduce using rule 6
    DIVIDE          reduce using rule 6
    RPAREN          reduce using rule 6
</pre>
</blockquote>

The different states that appear in this file are a representation of
every possible sequence of valid input tokens allowed by the grammar.
When receiving input tokens, the parser is building up a stack and
looking for matching rules. Each state keeps track of the grammar
rules that might be in the process of being matched at that point. Within each
rule, the "." character indicates the current location of the parse
within that rule. In addition, the actions for each valid input token
are listed. When a shift/reduce or reduce/reduce conflict arises,
rules <em>not</em> selected are prefixed with an !. For example:

<blockquote>
<pre>
  ! TIMES           [ reduce using rule 2 ]
  ! DIVIDE          [ reduce using rule 2 ]
  ! PLUS            [ shift and go to state 6 ]
  ! MINUS           [ shift and go to state 5 ]
</pre>
</blockquote>

By looking at these rules (and with a little practice), you can usually track down the source
of most parsing conflicts. It should also be stressed that not all shift-reduce conflicts are
bad. However, the only way to be sure that they are resolved correctly is to look at <tt>parser.out</tt>.

<H3><a name="ply_nn29"></a>6.8 Syntax Error Handling</H3>

If you are creating a parser for production use, the handling of
syntax errors is important. As a general rule, you don't want a
parser to simply throw up its hands and stop at the first sign of
trouble. Instead, you want it to report the error, recover if possible, and
continue parsing so that all of the errors in the input get reported
to the user at once. This is the standard behavior found in compilers
for languages such as C, C++, and Java.

<p>
In PLY, when a syntax error occurs during parsing, the error is immediately
detected (i.e., the parser does not read any more tokens beyond the
source of the error). However, at this point, the parser enters a
recovery mode that can be used to try and continue further parsing.
As a general rule, error recovery in LR parsers is a delicate
topic that involves ancient rituals and black-magic. The recovery mechanism
provided by <tt>yacc.py</tt> is comparable to Unix yacc so you may want to
consult a book like O'Reilly's "Lex and Yacc" for some of the finer details.

<p>
When a syntax error occurs, <tt>yacc.py</tt> performs the following steps:

<ol>
<li>On the first occurrence of an error, the user-defined <tt>p_error()</tt> function
is called with the offending token as an argument (a minimal handler is sketched
just after this list). However, if the syntax error is due to
reaching the end-of-file, <tt>p_error()</tt> is called with an argument of <tt>None</tt>.
Afterwards, the parser enters
an "error-recovery" mode in which it will not make future calls to <tt>p_error()</tt> until it
has successfully shifted at least 3 tokens onto the parsing stack.

<p>
<li>If no recovery action is taken in <tt>p_error()</tt>, the offending lookahead token is replaced
with a special <tt>error</tt> token.

<p>

--- 8 lines omitted ---

<li>If a grammar rule accepts <tt>error</tt> as a token, it will be
shifted onto the parsing stack.

<p>
<li>If the top item of the parsing stack is <tt>error</tt>, lookahead tokens will be discarded until the
parser can successfully shift a new symbol or reduce a rule involving <tt>error</tt>.
</ol>

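To make the first step concrete, here is a minimal <tt>p_error()</tt>
handler, a sketch consistent with the steps above (the exact messages
and reporting style are assumptions; adapt them to your application):

<blockquote>
<pre>
def p_error(p):
    # p is the offending token, or None if the error was at end-of-file
    if p:
        print "Syntax error at token", p.type
    else:
        print "Syntax error at end of input"
</pre>
</blockquote>
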
<H4><a name="ply_nn30"></a>6.8.1 Recovery and resynchronization with error rules</H4>

The most well-behaved approach for handling syntax errors is to write grammar rules that include the <tt>error</tt>
token. For example, suppose your language had a grammar rule for a print statement like this:

<blockquote>
<pre>
def p_statement_print(p):

--- 35 lines omitted ---

     print "Syntax error in print statement. Bad expression"
</pre>
</blockquote>

This is because the first bad token encountered will cause the rule to
be reduced--which may make it difficult to recover if more bad tokens
immediately follow.

<H4><a name="ply_nn31"></a>6.8.2 Panic mode recovery</H4>

An alternative error recovery scheme is to enter a panic mode recovery in which tokens are
discarded to a point where the parser might be able to recover in some sensible manner.

<p>
Panic mode recovery is implemented entirely in the <tt>p_error()</tt> function. For example, this
function starts discarding tokens until it reaches a closing '}'. Then, it restarts the

--- 56 lines omitted ---

<blockquote>
<pre>
        if not tok or tok.type == 'SEMI': break
    yacc.errok()

    # Return SEMI to the parser as the next lookahead token
    return tok
</pre>
</blockquote>

<H4><a name="ply_nn35"></a>6.8.3 Signaling an error from a production</H4>

If necessary, a production rule can manually force the parser to enter error recovery. This
is done by raising the <tt>SyntaxError</tt> exception like this:

<blockquote>
<pre>
def p_production(p):
    'production : some production ...'
    raise SyntaxError
</pre>
</blockquote>

The effect of raising <tt>SyntaxError</tt> is the same as if the last symbol shifted onto the
parsing stack was actually a syntax error. Thus, when you do this, the last symbol shifted is popped off
of the parsing stack and the current lookahead token is set to an <tt>error</tt> token. The parser
then enters error-recovery mode where it tries to reduce rules that can accept <tt>error</tt> tokens.
The steps that follow from this point are exactly the same as if a syntax error were detected and
<tt>p_error()</tt> were called.

<P>
One important aspect of manually setting an error is that the <tt>p_error()</tt> function will <b>NOT</b> be
called in this case. If you need to issue an error message, make sure you do it in the production that
raises <tt>SyntaxError</tt>.

<P>
Note: This feature of PLY is meant to mimic the behavior of the YYERROR macro in yacc.

<H4><a name="ply_nn32"></a>6.8.4 General comments on error handling</H4>

For normal types of languages, error recovery with error rules and resynchronization characters is probably the most reliable
technique. This is because you can instrument the grammar to catch errors at selected places where it is relatively easy
to recover and continue parsing. Panic mode recovery is really only useful in certain specialized applications where you might want
to discard huge portions of the input text to find a valid restart point.

<H3><a name="ply_nn33"></a>6.9 Line Number and Position Tracking</H3>

Position tracking is often a tricky problem when writing compilers.
By default, PLY tracks the line number and position of all tokens.
This information is available using the following functions:

<ul>
<li><tt>p.lineno(num)</tt>. Return the line number for symbol <em>num</em>
<li><tt>p.lexpos(num)</tt>. Return the lexing position for symbol <em>num</em>
</ul>

For example:

<blockquote>
<pre>
def p_expression(p):
    'expression : expression PLUS expression'
    line  = p.lineno(2)       # line number of the PLUS token
    index = p.lexpos(2)       # Position of the PLUS token
</pre>
</blockquote>

As an optional feature, <tt>yacc.py</tt> can automatically track line
numbers and positions for all of the grammar symbols as well.
However, this extra tracking requires extra processing and can
significantly slow down parsing. Therefore, it must be enabled by
passing the
<tt>tracking=True</tt> option to <tt>yacc.parse()</tt>. For example:

<blockquote>
<pre>
yacc.parse(data,tracking=True)
</pre>
</blockquote>

Once enabled, the <tt>lineno()</tt> and <tt>lexpos()</tt> methods work
for all grammar symbols. In addition, two additional methods can be
used:

<ul>
<li><tt>p.linespan(num)</tt>. Return a tuple (startline,endline) with the starting and ending line number for symbol <em>num</em>.
<li><tt>p.lexspan(num)</tt>. Return a tuple (start,end) with the starting and ending positions for symbol <em>num</em>.
</ul>

For example:

--- 25 lines omitted ---

<blockquote>
<pre>
def p_bad_func(p):
    'funccall : fname LPAREN error RPAREN'
    # Line number reported from LPAREN token
    print "Bad function call at line", p.lineno(2)
</pre>
</blockquote>

<p>
Similarly, you may get better parsing performance if you only
selectively propagate line number information where it's needed using
the <tt>p.set_lineno()</tt> method. For example:

<blockquote>
<pre>
def p_fname(p):
    'fname : ID'
    p[0] = p[1]
    p.set_lineno(0,p.lineno(1))
</pre>
</blockquote>

PLY doesn't retain line number information from rules that have already been
parsed. If you are building an abstract syntax tree and need to have line numbers,
you should make sure that the line numbers appear in the tree itself.

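One way to do that, sketched here using the tuple-style nodes shown in
the next section (the extra trailing element is an assumption, not a
PLY requirement), is to store the line number inside each node as you
build it:

<blockquote>
<pre>
def p_expression_binop(p):
    'expression : expression PLUS expression'
    # Keep the operator's line number as part of the node itself
    p[0] = ('binary-expression',p[2],p[1],p[3],p.lineno(2))
</pre>
</blockquote>
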
<H3><a name="ply_nn34"></a>6.10 AST Construction</H3>

<tt>yacc.py</tt> provides no special functions for constructing an
abstract syntax tree. However, such construction is easy enough to do
on your own.

<p>A minimal way to construct a tree is to simply create and
propagate a tuple or list in each grammar rule function. There
are many possible ways to do this, but one example would be something
like this:

<blockquote>
<pre>
def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = ('binary-expression',p[2],p[1],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = ('group-expression',p[2])

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = ('number-expression',p[1])
</pre>
</blockquote>

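Trees built this way are easy to walk with ordinary Python code. As a
rough sketch (assuming the node layouts above, and that operator tokens
carry values such as '+' and '*'), an evaluator might look like this:

<blockquote>
<pre>
def evaluate(node):
    # Dispatch on the tag stored in the first element of each tuple
    ntype = node[0]
    if ntype == 'number-expression':
        return node[1]
    elif ntype == 'group-expression':
        return evaluate(node[1])
    elif ntype == 'binary-expression':
        left  = evaluate(node[2])
        right = evaluate(node[3])
        if   node[1] == '+': return left + right
        elif node[1] == '-': return left - right
        elif node[1] == '*': return left * right
        elif node[1] == '/': return left / right
</pre>
</blockquote>
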
<p>
Another approach is to create a set of data structures for different
kinds of abstract syntax tree nodes and assign nodes to <tt>p[0]</tt>
in each rule. For example:

<blockquote>
<pre>
class Expr: pass

class BinOp(Expr):
    def __init__(self,left,op,right):
        self.type = "binop"
        self.left = left
        self.right = right
        self.op = op

--- 16 lines omitted ---

    p[0] = p[2]

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = Number(p[1])
</pre>
</blockquote>

The advantage to this approach is that it may make it easier to attach more complicated
semantics, type checking, code generation, and other features to the node classes.

<p>
To simplify tree traversal, it may make sense to pick a very generic
tree structure for your parse tree nodes. For example:

<blockquote>
<pre>
class Node:
    def __init__(self,type,children=None,leaf=None):
        self.type = type
        if children:
            self.children = children
        else:

--- 5 lines omitted ---

                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = Node("binop", [p[1],p[3]], p[2])
</pre>
</blockquote>

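For instance, with a generic structure like this, a single recursive
routine can walk any tree. A minimal sketch (assuming every child is
itself a <tt>Node</tt> instance; the printed format is arbitrary):

<blockquote>
<pre>
def traverse(node,indent=0):
    # Print this node's type, then recurse into its children
    print " "*indent + node.type
    for child in node.children:
        traverse(child,indent+4)
</pre>
</blockquote>
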
<H3><a name="ply_nn35"></a>6.11 Embedded Actions</H3>

The parsing technique used by yacc only allows actions to be executed at the end of a rule. For example,
suppose you have a rule like this:

<blockquote>
<pre>
def p_foo(p):
    "foo : A B C D"
    print "Parsed a foo", p[1],p[2],p[3],p[4]
</pre>
</blockquote>

<p>
In this case, the supplied action code only executes after all of the
symbols <tt>A</tt>, <tt>B</tt>, <tt>C</tt>, and <tt>D</tt> have been
parsed. Sometimes, however, it is useful to execute small code
fragments during intermediate stages of parsing. For example, suppose
you wanted to perform some action immediately after <tt>A</tt> has
been parsed. To do this, write an empty rule like this:

<blockquote>
<pre>
def p_foo(p):
    "foo : A seen_A B C D"
    print "Parsed a foo", p[1],p[3],p[4],p[5]
    print "seen_A returned", p[2]
</pre>
</blockquote>

--- 46 lines omitted ---

<blockquote>
<pre>
def p_abcx(p):
    "abcx : A B seen_AB C X"

def p_seen_AB(p):
    "seen_AB :"
</pre>
</blockquote>

an extra shift-reduce conflict will be introduced. This conflict is
caused by the fact that the same symbol <tt>C</tt> appears next in
both the <tt>abcd</tt> and <tt>abcx</tt> rules. The parser can either
shift the symbol (<tt>abcd</tt> rule) or reduce the empty
rule <tt>seen_AB</tt> (<tt>abcx</tt> rule).

<p>
A common use of embedded rules is to control other aspects of parsing
such as scoping of local variables. For example, if you were parsing C code, you might
write code like this:

<blockquote>
<pre>

--- 7 lines omitted ---

    "new_scope :"
    # Create a new scope for local variables
    s = new_scope()
    push_scope(s)
    ...
</pre>
</blockquote>

In this case, the embedded action <tt>new_scope</tt> executes
immediately after a <tt>LBRACE</tt> (<tt>{</tt>) symbol is parsed.
This might adjust internal symbol tables and other aspects of the
parser. Upon completion of the rule <tt>statements_block</tt>, code
might undo the operations performed in the embedded action
(e.g., <tt>pop_scope()</tt>).

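The undo side might look like the following sketch, where the exact
rule and the <tt>pop_scope()</tt> helper are assumptions based on the
description above:

<blockquote>
<pre>
def p_statements_block(p):
    "statements_block : LBRACE new_scope statements RBRACE"
    # Undo the scope pushed by the embedded new_scope action
    pop_scope()
</pre>
</blockquote>
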
<H3><a name="ply_nn36"></a>6.12 Miscellaneous Yacc Notes</H3>

<ul>
<li>The default parsing method is LALR. To use SLR instead, run yacc() as follows:

<blockquote>
<pre>
yacc.yacc(method="SLR")
</pre>
</blockquote>

--- 54 lines omitted ---

Note: If you disable table generation, yacc() will regenerate the parsing tables
each time it runs (which may take awhile depending on how large your grammar is).

<P>
<li>To print copious amounts of debugging during parsing, use:

<blockquote>
<pre>
yacc.parse(debug=1)
</pre>
</blockquote>

<p>
<li>The <tt>yacc.yacc()</tt> function really returns a parser object. If you want to support multiple
parsers in the same application, do this:

<blockquote>
<pre>
p = yacc.yacc()
...
p.parse()
</pre>
</blockquote>

--- 12 lines omitted ---

and several hundred states. For more complex languages such as C, table generation may take 30-60 seconds on a slow
machine. Please be patient.

<p>
<li>Since LR parsing is driven by tables, the performance of the parser is largely independent of the
size of the grammar. The biggest bottlenecks will be the lexer and the complexity of the code in your grammar rules.
</ul>

<H2><a name="ply_nn37"></a>7. Multiple Parsers and Lexers</H2>

In advanced parsing applications, you may want to have multiple
parsers and lexers.

<p>
As a general rule, this isn't a problem. However, to make it work,
you need to carefully make sure everything gets hooked up correctly.
First, make sure you save the objects returned by <tt>lex()</tt> and
<tt>yacc()</tt>. For example:

<blockquote>
<pre>
lexer  = lex.lex()       # Return lexer object
parser = yacc.yacc()     # Return parser object
</pre>
</blockquote>

Next, when parsing, make sure you give the <tt>parse()</tt> function a reference to the lexer it
should be using. For example:

<blockquote>
<pre>
parser.parse(text,lexer=lexer)
</pre>
</blockquote>

If you forget to do this, the parser will use the last lexer
created--which is not always what you want.

<p>
Within lexer and parser rule functions, these objects are also
available. In the lexer, the "lexer" attribute of a token refers to
the lexer object that triggered the rule. For example:

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    ...
    print t.lexer           # Show lexer object
</pre>
</blockquote>

--- 10 lines omitted ---

<blockquote>
<pre>
    print p.lexer           # Show lexer object
</pre>
</blockquote>

If necessary, arbitrary attributes can be attached to the lexer or parser object.
For example, if you wanted to have different parsing modes, you could attach a mode
attribute to the parser object and look at it later.

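As a rough sketch of that idea (the <tt>mode</tt> attribute here is
purely application-defined, not something PLY itself uses):

<blockquote>
<pre>
def p_statement_expr(p):
    'statement : expression'
    # Consult the application-defined attribute attached below
    if parser.mode == "interactive":
        print p[1]

parser = yacc.yacc()
parser.mode = "interactive"     # Attach an arbitrary attribute
</pre>
</blockquote>
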
<H2><a name="ply_nn38"></a>8. Using Python's Optimized Mode</H2>

Because PLY uses information from doc-strings, parsing and lexing
information must be gathered while running the Python interpreter in
normal mode (i.e., not with the -O or -OO options). However, if you
specify optimized mode like this:

<blockquote>
<pre>
lex.lex(optimize=1)
yacc.yacc(optimize=1)
</pre>
</blockquote>

then PLY can later be used when Python runs in optimized mode. To make this work,
make sure you first run Python in normal mode. Once the lexing and parsing tables
have been generated the first time, run Python in optimized mode. PLY will use
the tables without the need for doc strings.

<p>
Beware: running PLY in optimized mode disables a lot of error
checking. You should only do this when your project has stabilized
and you don't need to do any debugging. One of the purposes of
optimized mode is to substantially decrease the startup time of
your compiler (by assuming that everything is already properly
specified and works).

<H2><a name="ply_nn44"></a>9. Advanced Debugging</H2>

<p>
Debugging a compiler is typically not an easy task. PLY provides some
advanced diagnostic capabilities through the use of Python's
<tt>logging</tt> module. The next two sections describe this:

<H3><a name="ply_nn45"></a>9.1 Debugging the lex() and yacc() commands</H3>

<p>
Both the <tt>lex()</tt> and <tt>yacc()</tt> commands have a debugging
mode that can be enabled using the <tt>debug</tt> flag. For example:

<blockquote>
<pre>
lex.lex(debug=True)
yacc.yacc(debug=True)
</pre>
</blockquote>

Normally, the output produced by debugging is routed to either
standard error or, in the case of <tt>yacc()</tt>, to a file
<tt>parser.out</tt>. This output can be more carefully controlled
by supplying a logging object. Here is an example that adds
information about where different debugging messages are coming from:

<blockquote>
<pre>
# Set up a logging object
import logging
logging.basicConfig(
    level = logging.DEBUG,
    filename = "parselog.txt",
    filemode = "w",
    format = "%(filename)10s:%(lineno)4d:%(message)s"
)
log = logging.getLogger()

lex.lex(debug=True,debuglog=log)
yacc.yacc(debug=True,debuglog=log)
</pre>
</blockquote>

If you supply a custom logger, the amount of debugging
information produced can be controlled by setting the logging level.
Typically, debugging messages are either issued at the <tt>DEBUG</tt>,
<tt>INFO</tt>, or <tt>WARNING</tt> levels.

<p>
PLY's error messages and warnings are also produced using the logging
interface. This can be controlled by passing a logging object
using the <tt>errorlog</tt> parameter.

<blockquote>
<pre>
lex.lex(errorlog=log)
yacc.yacc(errorlog=log)
</pre>
</blockquote>

If you want to completely silence warnings, you can either pass in a
logging object with an appropriate filter level or use the <tt>NullLogger</tt>
object defined in either <tt>lex</tt> or <tt>yacc</tt>. For example:

<blockquote>
<pre>
yacc.yacc(errorlog=yacc.NullLogger())
</pre>
</blockquote>

<H3><a name="ply_nn46"></a>9.2 Run-time Debugging</H3>

<p>
To enable run-time debugging of a parser, use the <tt>debug</tt> option to parse. This
option can either be an integer (which simply turns debugging on or off) or an instance
of a logger object. For example:

<blockquote>
<pre>
log = logging.getLogger()
parser.parse(input,debug=log)
</pre>
</blockquote>

If a logging object is passed, you can use its filtering level to control how much
output gets generated. The <tt>INFO</tt> level is used to produce information
about rule reductions. The <tt>DEBUG</tt> level will show information about the
parsing stack, token shifts, and other details. The <tt>ERROR</tt> level shows information
related to parsing errors.

<p>
For very complicated problems, you should pass in a logging object that
redirects to a file where you can more easily inspect the output after
execution.

<H2><a name="ply_nn39"></a>10. Where to go from here?</H2>

The <tt>examples</tt> directory of the PLY distribution contains several simple examples. Please consult a
compilers textbook for the theory and underlying implementation details of LR parsing.

</body>
</html>