1<html> 2<head> 3<title>PLY (Python Lex-Yacc)</title> 4</head> 5<body bgcolor="#ffffff"> 6 7<h1>PLY (Python Lex-Yacc)</h1> 8 9<b> 10David M. Beazley <br> 11dave@dabeaz.com<br> 12</b> 13 14<p> 15<b>PLY Version: 3.0</b> 16<p> 17 18<!-- INDEX --> 19<div class="sectiontoc"> 20<ul> 21<li><a href="#ply_nn1">Preface and Requirements</a> 22<li><a href="#ply_nn1">Introduction</a> 23<li><a href="#ply_nn2">PLY Overview</a> 24<li><a href="#ply_nn3">Lex</a> 25<ul> 26<li><a href="#ply_nn4">Lex Example</a> 27<li><a href="#ply_nn5">The tokens list</a> 28<li><a href="#ply_nn6">Specification of tokens</a> 29<li><a href="#ply_nn7">Token values</a> 30<li><a href="#ply_nn8">Discarded tokens</a> 31<li><a href="#ply_nn9">Line numbers and positional information</a> 32<li><a href="#ply_nn10">Ignored characters</a> 33<li><a href="#ply_nn11">Literal characters</a> 34<li><a href="#ply_nn12">Error handling</a> 35<li><a href="#ply_nn13">Building and using the lexer</a> 36<li><a href="#ply_nn14">The @TOKEN decorator</a> 37<li><a href="#ply_nn15">Optimized mode</a> 38<li><a href="#ply_nn16">Debugging</a> 39<li><a href="#ply_nn17">Alternative specification of lexers</a> 40<li><a href="#ply_nn18">Maintaining state</a> 41<li><a href="#ply_nn19">Lexer cloning</a> 42<li><a href="#ply_nn20">Internal lexer state</a> 43<li><a href="#ply_nn21">Conditional lexing and start conditions</a> 44<li><a href="#ply_nn21">Miscellaneous Issues</a> 45</ul> 46<li><a href="#ply_nn22">Parsing basics</a> 47<li><a href="#ply_nn23">Yacc</a> 48<ul> 49<li><a href="#ply_nn24">An example</a> 50<li><a href="#ply_nn25">Combining Grammar Rule Functions</a> 51<li><a href="#ply_nn26">Character Literals</a> 52<li><a href="#ply_nn26">Empty Productions</a> 53<li><a href="#ply_nn28">Changing the starting symbol</a> 54<li><a href="#ply_nn27">Dealing With Ambiguous Grammars</a> 55<li><a href="#ply_nn28">The parser.out file</a> 56<li><a href="#ply_nn29">Syntax Error Handling</a> 57<ul> 58<li><a href="#ply_nn30">Recovery and resynchronization with error rules</a> 59<li><a href="#ply_nn31">Panic mode recovery</a> 60<li><a href="#ply_nn35">Signaling an error from a production</a> 61<li><a href="#ply_nn32">General comments on error handling</a> 62</ul> 63<li><a href="#ply_nn33">Line Number and Position Tracking</a> 64<li><a href="#ply_nn34">AST Construction</a> 65<li><a href="#ply_nn35">Embedded Actions</a> 66<li><a href="#ply_nn36">Miscellaneous Yacc Notes</a> 67</ul> 68<li><a href="#ply_nn37">Multiple Parsers and Lexers</a> 69<li><a href="#ply_nn38">Using Python's Optimized Mode</a> 70<li><a href="#ply_nn44">Advanced Debugging</a> 71<ul> 72<li><a href="#ply_nn45">Debugging the lex() and yacc() commands</a> 73<li><a href="#ply_nn46">Run-time Debugging</a> 74</ul> 75<li><a href="#ply_nn39">Where to go from here?</a> 76</ul> 77</div> 78<!-- INDEX --> 79 80 81 82<H2><a name="ply_nn1"></a>1. Preface and Requirements</H2> 83 84 85<p> 86This document provides an overview of lexing and parsing with PLY. 87Given the intrinsic complexity of parsing, I would strongly advise 88that you read (or at least skim) this entire document before jumping 89into a big development project with PLY. 90</p> 91 92<p> 93PLY-3.0 is compatible with both Python 2 and Python 3. Be aware that 94Python 3 support is new and has not been extensively tested (although 95all of the examples and unit tests pass under Python 3.0). If you are 96using Python 2, you should try to use Python 2.4 or newer. 
Although PLY 97works with versions as far back as Python 2.2, some of its optional features 98require more modern library modules. 99</p> 100 101<H2><a name="ply_nn1"></a>2. Introduction</H2> 102 103 104PLY is a pure-Python implementation of the popular compiler 105construction tools lex and yacc. The main goal of PLY is to stay 106fairly faithful to the way in which traditional lex/yacc tools work. 107This includes supporting LALR(1) parsing as well as providing 108extensive input validation, error reporting, and diagnostics. Thus, 109if you've used yacc in another programming language, it should be 110relatively straightforward to use PLY. 111 112<p> 113Early versions of PLY were developed to support an Introduction to 114Compilers Course I taught in 2001 at the University of Chicago. In this course, 115students built a fully functional compiler for a simple Pascal-like 116language. Their compiler, implemented entirely in Python, had to 117include lexical analysis, parsing, type checking, type inference, 118nested scoping, and code generation for the SPARC processor. 119Approximately 30 different compiler implementations were completed in 120this course. Most of PLY's interface and operation has been influenced by common 121usability problems encountered by students. Since 2001, PLY has 122continued to be improved as feedback has been received from users. 123PLY-3.0 represents a major refactoring of the original implementation 124with an eye towards future enhancements. 125 126<p> 127Since PLY was primarily developed as an instructional tool, you will 128find it to be fairly picky about token and grammar rule 129specification. In part, this 130added formality is meant to catch common programming mistakes made by 131novice users. However, advanced users will also find such features to 132be useful when building complicated grammars for real programming 133languages. It should also be noted that PLY does not provide much in 134the way of bells and whistles (e.g., automatic construction of 135abstract syntax trees, tree traversal, etc.). Nor would I consider it 136to be a parsing framework. Instead, you will find a bare-bones, yet 137fully capable lex/yacc implementation written entirely in Python. 138 139<p> 140The rest of this document assumes that you are somewhat familar with 141parsing theory, syntax directed translation, and the use of compiler 142construction tools such as lex and yacc in other programming 143languages. If you are unfamilar with these topics, you will probably 144want to consult an introductory text such as "Compilers: Principles, 145Techniques, and Tools", by Aho, Sethi, and Ullman. O'Reilly's "Lex 146and Yacc" by John Levine may also be handy. In fact, the O'Reilly book can be 147used as a reference for PLY as the concepts are virtually identical. 148 149<H2><a name="ply_nn2"></a>3. PLY Overview</H2> 150 151 152PLY consists of two separate modules; <tt>lex.py</tt> and 153<tt>yacc.py</tt>, both of which are found in a Python package 154called <tt>ply</tt>. The <tt>lex.py</tt> module is used to break input text into a 155collection of tokens specified by a collection of regular expression 156rules. <tt>yacc.py</tt> is used to recognize language syntax that has 157been specified in the form of a context free grammar. <tt>yacc.py</tt> uses LR parsing and generates its parsing tables 158using either the LALR(1) (the default) or SLR table generation algorithms. 159 160<p> 161The two tools are meant to work together. 
Specifically, 162<tt>lex.py</tt> provides an external interface in the form of a 163<tt>token()</tt> function that returns the next valid token on the 164input stream. <tt>yacc.py</tt> calls this repeatedly to retrieve 165tokens and invoke grammar rules. The output of <tt>yacc.py</tt> is 166often an Abstract Syntax Tree (AST). However, this is entirely up to 167the user. If desired, <tt>yacc.py</tt> can also be used to implement 168simple one-pass compilers. 169 170<p> 171Like its Unix counterpart, <tt>yacc.py</tt> provides most of the 172features you expect including extensive error checking, grammar 173validation, support for empty productions, error tokens, and ambiguity 174resolution via precedence rules. In fact, everything that is possible in traditional yacc 175should be supported in PLY. 176 177<p> 178The primary difference between 179<tt>yacc.py</tt> and Unix <tt>yacc</tt> is that <tt>yacc.py</tt> 180doesn't involve a separate code-generation process. 181Instead, PLY relies on reflection (introspection) 182to build its lexers and parsers. Unlike traditional lex/yacc which 183require a special input file that is converted into a separate source 184file, the specifications given to PLY <em>are</em> valid Python 185programs. This means that there are no extra source files nor is 186there a special compiler construction step (e.g., running yacc to 187generate Python code for the compiler). Since the generation of the 188parsing tables is relatively expensive, PLY caches the results and 189saves them to a file. If no changes are detected in the input source, 190the tables are read from the cache. Otherwise, they are regenerated. 191 192<H2><a name="ply_nn3"></a>4. Lex</H2> 193 194 195<tt>lex.py</tt> is used to tokenize an input string. For example, suppose 196you're writing a programming language and a user supplied the following input string: 197 198<blockquote> 199<pre> 200x = 3 + 42 * (s - t) 201</pre> 202</blockquote> 203 204A tokenizer splits the string into individual tokens 205 206<blockquote> 207<pre> 208'x','=', '3', '+', '42', '*', '(', 's', '-', 't', ')' 209</pre> 210</blockquote> 211 212Tokens are usually given names to indicate what they are. For example: 213 214<blockquote> 215<pre> 216'ID','EQUALS','NUMBER','PLUS','NUMBER','TIMES', 217'LPAREN','ID','MINUS','ID','RPAREN' 218</pre> 219</blockquote> 220 221More specifically, the input is broken into pairs of token types and values. For example: 222 223<blockquote> 224<pre> 225('ID','x'), ('EQUALS','='), ('NUMBER','3'), 226('PLUS','+'), ('NUMBER','42), ('TIMES','*'), 227('LPAREN','('), ('ID','s'), ('MINUS','-'), 228('ID','t'), ('RPAREN',')' 229</pre> 230</blockquote> 231 232The identification of tokens is typically done by writing a series of regular expression 233rules. The next section shows how this is done using <tt>lex.py</tt>. 234 235<H3><a name="ply_nn4"></a>4.1 Lex Example</H3> 236 237 238The following example shows how <tt>lex.py</tt> is used to write a simple tokenizer. 239 240<blockquote> 241<pre> 242# ------------------------------------------------------------ 243# calclex.py 244# 245# tokenizer for a simple expression evaluator for 246# numbers and +,-,*,/ 247# ------------------------------------------------------------ 248import ply.lex as lex 249 250# List of token names. 
This is always required
tokens = (
   'NUMBER',
   'PLUS',
   'MINUS',
   'TIMES',
   'DIVIDE',
   'LPAREN',
   'RPAREN',
)

# Regular expression rules for simple tokens
t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_LPAREN  = r'\('
t_RPAREN  = r'\)'

# A regular expression rule with some action code
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

# A string containing ignored characters (spaces and tabs)
t_ignore  = ' \t'

# Error handling rule
def t_error(t):
    print "Illegal character '%s'" % t.value[0]
    t.lexer.skip(1)

# Build the lexer
lexer = lex.lex()

</pre>
</blockquote>
To use the lexer, you first need to feed it some input text using
its <tt>input()</tt> method.  After that, repeated calls
to <tt>token()</tt> produce tokens.  The following code shows how this
works:

<blockquote>
<pre>

# Test it out
data = '''
3 + 4 * 10
  + -20 *2
'''

# Give the lexer some input
lexer.input(data)

# Tokenize
while True:
    tok = lexer.token()
    if not tok: break      # No more input
    print tok
</pre>
</blockquote>

When executed, the example will produce the following output:

<blockquote>
<pre>
$ python example.py
LexToken(NUMBER,3,2,1)
LexToken(PLUS,'+',2,3)
LexToken(NUMBER,4,2,5)
LexToken(TIMES,'*',2,7)
LexToken(NUMBER,10,2,10)
LexToken(PLUS,'+',3,14)
LexToken(MINUS,'-',3,16)
LexToken(NUMBER,20,3,18)
LexToken(TIMES,'*',3,20)
LexToken(NUMBER,2,3,21)
</pre>
</blockquote>

Lexers also support the iteration protocol.  So, you can write the above loop as follows:

<blockquote>
<pre>
for tok in lexer:
    print tok
</pre>
</blockquote>

The tokens returned by <tt>lexer.token()</tt> are instances
of <tt>LexToken</tt>.  This object has
attributes <tt>tok.type</tt>, <tt>tok.value</tt>,
<tt>tok.lineno</tt>, and <tt>tok.lexpos</tt>.  The following code shows an example of
accessing these attributes:

<blockquote>
<pre>
# Tokenize
while True:
    tok = lexer.token()
    if not tok: break      # No more input
    print tok.type, tok.value, tok.lineno, tok.lexpos
</pre>
</blockquote>

The <tt>tok.type</tt> and <tt>tok.value</tt> attributes contain the
type and value of the token itself.
<tt>tok.lineno</tt> and <tt>tok.lexpos</tt> contain information about
the location of the token.  <tt>tok.lexpos</tt> is the index of the
token relative to the start of the input text.

<H3><a name="ply_nn5"></a>4.2 The tokens list</H3>


All lexers must provide a list <tt>tokens</tt> that defines all of the possible token
names that can be produced by the lexer.  This list is always required
and is used to perform a variety of validation checks.  The tokens list is also used by the
<tt>yacc.py</tt> module to identify terminals.

<p>
In the example, the following code specifies the token names:

<blockquote>
<pre>
tokens = (
   'NUMBER',
   'PLUS',
   'MINUS',
   'TIMES',
   'DIVIDE',
   'LPAREN',
   'RPAREN',
)
</pre>
</blockquote>

<H3><a name="ply_nn6"></a>4.3 Specification of tokens</H3>


Each token is specified by writing a regular expression rule.
Each of these rules is defined by a declaration with a special prefix <tt>t_</tt> to indicate that it
defines a token.  For simple tokens, the regular expression can
be specified as a string such as this (note: Python raw strings are used since they are the
most convenient way to write regular expression strings):

<blockquote>
<pre>
t_PLUS = r'\+'
</pre>
</blockquote>

In this case, the name following the <tt>t_</tt> must exactly match one of the
names supplied in <tt>tokens</tt>.  If some kind of action needs to be performed,
a token rule can be specified as a function.  For example, this rule matches numbers and
converts the string into a Python integer.

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t
</pre>
</blockquote>

When a function is used, the regular expression rule is specified in the function documentation string.
The function always takes a single argument which is an instance of
<tt>LexToken</tt>.  This object has attributes of <tt>t.type</tt> which is the token type (as a string),
<tt>t.value</tt> which is the lexeme (the actual text matched), <tt>t.lineno</tt> which is the current line number, and <tt>t.lexpos</tt> which
is the position of the token relative to the beginning of the input text.
By default, <tt>t.type</tt> is set to the name following the <tt>t_</tt> prefix.  The action
function can modify the contents of the <tt>LexToken</tt> object as appropriate.  However,
when it is done, the resulting token should be returned.  If no value is returned by the action
function, the token is simply discarded and the next token is read.

<p>
Internally, <tt>lex.py</tt> uses the <tt>re</tt> module to do its pattern matching.  When building the master regular expression,
rules are added in the following order:
<p>
<ol>
<li>All tokens defined by functions are added in the same order as they appear in the lexer file.
<li>Tokens defined by strings are added next by sorting them in order of decreasing regular expression length (longer expressions
are added first).
</ol>
<p>
Without this ordering, it can be difficult to correctly match certain types of tokens. For example, if you
wanted to have separate tokens for "=" and "==", you need to make sure that "==" is checked first.  By sorting regular
expressions in order of decreasing length, this problem is solved for rules defined as strings.  For functions,
the order can be explicitly controlled since rules appearing first are checked first.

<p>
To handle reserved words, you should write a single rule to match an
identifier and do a special name lookup in a function like this:

<blockquote>
<pre>
reserved = {
   'if' : 'IF',
   'then' : 'THEN',
   'else' : 'ELSE',
   'while' : 'WHILE',
   ...
}

tokens = ['LPAREN','RPAREN',...,'ID'] + list(reserved.values())

def t_ID(t):
    r'[a-zA-Z_][a-zA-Z_0-9]*'
    t.type = reserved.get(t.value,'ID')    # Check for reserved words
    return t
</pre>
</blockquote>

This approach greatly reduces the number of regular expression rules and is likely to make things a little faster.

<p>
<b>Note:</b> You should avoid writing individual rules for reserved words.
For example, if you write rules like this, 473 474<blockquote> 475<pre> 476t_FOR = r'for' 477t_PRINT = r'print' 478</pre> 479</blockquote> 480 481those rules will be triggered for identifiers that include those words as a prefix such as "forget" or "printed". This is probably not 482what you want. 483 484<H3><a name="ply_nn7"></a>4.4 Token values</H3> 485 486 487When tokens are returned by lex, they have a value that is stored in the <tt>value</tt> attribute. Normally, the value is the text 488that was matched. However, the value can be assigned to any Python object. For instance, when lexing identifiers, you may 489want to return both the identifier name and information from some sort of symbol table. To do this, you might write a rule like this: 490 491<blockquote> 492<pre> 493def t_ID(t): 494 ... 495 # Look up symbol table information and return a tuple 496 t.value = (t.value, symbol_lookup(t.value)) 497 ... 498 return t 499</pre> 500</blockquote> 501 502It is important to note that storing data in other attribute names is <em>not</em> recommended. The <tt>yacc.py</tt> module only exposes the 503contents of the <tt>value</tt> attribute. Thus, accessing other attributes may be unnecessarily awkward. If you 504need to store multiple values on a token, assign a tuple, dictionary, or instance to <tt>value</tt>. 505 506<H3><a name="ply_nn8"></a>4.5 Discarded tokens</H3> 507 508 509To discard a token, such as a comment, simply define a token rule that returns no value. For example: 510 511<blockquote> 512<pre> 513def t_COMMENT(t): 514 r'\#.*' 515 pass 516 # No return value. Token discarded 517</pre> 518</blockquote> 519 520Alternatively, you can include the prefix "ignore_" in the token declaration to force a token to be ignored. For example: 521 522<blockquote> 523<pre> 524t_ignore_COMMENT = r'\#.*' 525</pre> 526</blockquote> 527 528Be advised that if you are ignoring many different kinds of text, you may still want to use functions since these provide more precise 529control over the order in which regular expressions are matched (i.e., functions are matched in order of specification whereas strings are 530sorted by regular expression length). 531 532<H3><a name="ply_nn9"></a>4.6 Line numbers and positional information</H3> 533 534 535<p>By default, <tt>lex.py</tt> knows nothing about line numbers. This is because <tt>lex.py</tt> doesn't know anything 536about what constitutes a "line" of input (e.g., the newline character or even if the input is textual data). 537To update this information, you need to write a special rule. In the example, the <tt>t_newline()</tt> rule shows how to do this. 538 539<blockquote> 540<pre> 541# Define a rule so we can track line numbers 542def t_newline(t): 543 r'\n+' 544 t.lexer.lineno += len(t.value) 545</pre> 546</blockquote> 547Within the rule, the <tt>lineno</tt> attribute of the underlying lexer <tt>t.lexer</tt> is updated. 548After the line number is updated, the token is simply discarded since nothing is returned. 549 550<p> 551<tt>lex.py</tt> does not perform and kind of automatic column tracking. However, it does record positional 552information related to each token in the <tt>lexpos</tt> attribute. Using this, it is usually possible to compute 553column information as a separate step. For instance, just count backwards until you reach a newline. 554 555<blockquote> 556<pre> 557# Compute column. 
558# input is the input text string 559# token is a token instance 560def find_column(input,token): 561 last_cr = input.rfind('\n',0,token.lexpos) 562 if last_cr < 0: 563 last_cr = 0 564 column = (token.lexpos - last_cr) + 1 565 return column 566</pre> 567</blockquote> 568 569Since column information is often only useful in the context of error handling, calculating the column 570position can be performed when needed as opposed to doing it for each token. 571 572<H3><a name="ply_nn10"></a>4.7 Ignored characters</H3> 573 574 575<p> 576The special <tt>t_ignore</tt> rule is reserved by <tt>lex.py</tt> for characters 577that should be completely ignored in the input stream. 578Usually this is used to skip over whitespace and other non-essential characters. 579Although it is possible to define a regular expression rule for whitespace in a manner 580similar to <tt>t_newline()</tt>, the use of <tt>t_ignore</tt> provides substantially better 581lexing performance because it is handled as a special case and is checked in a much 582more efficient manner than the normal regular expression rules. 583 584<H3><a name="ply_nn11"></a>4.8 Literal characters</H3> 585 586 587<p> 588Literal characters can be specified by defining a variable <tt>literals</tt> in your lexing module. For example: 589 590<blockquote> 591<pre> 592literals = [ '+','-','*','/' ] 593</pre> 594</blockquote> 595 596or alternatively 597 598<blockquote> 599<pre> 600literals = "+-*/" 601</pre> 602</blockquote> 603 604A literal character is simply a single character that is returned "as is" when encountered by the lexer. Literals are checked 605after all of the defined regular expression rules. Thus, if a rule starts with one of the literal characters, it will always 606take precedence. 607<p> 608When a literal token is returned, both its <tt>type</tt> and <tt>value</tt> attributes are set to the character itself. For example, <tt>'+'</tt>. 609 610<H3><a name="ply_nn12"></a>4.9 Error handling</H3> 611 612 613<p> 614Finally, the <tt>t_error()</tt> 615function is used to handle lexing errors that occur when illegal 616characters are detected. In this case, the <tt>t.value</tt> attribute contains the 617rest of the input string that has not been tokenized. In the example, the error function 618was defined as follows: 619 620<blockquote> 621<pre> 622# Error handling rule 623def t_error(t): 624 print "Illegal character '%s'" % t.value[0] 625 t.lexer.skip(1) 626</pre> 627</blockquote> 628 629In this case, we simply print the offending character and skip ahead one character by calling <tt>t.lexer.skip(1)</tt>. 630 631<H3><a name="ply_nn13"></a>4.10 Building and using the lexer</H3> 632 633 634<p> 635To build the lexer, the function <tt>lex.lex()</tt> is used. This function 636uses Python reflection (or introspection) to read the the regular expression rules 637out of the calling context and build the lexer. Once the lexer has been built, two methods can 638be used to control the lexer. 639 640<ul> 641<li><tt>lexer.input(data)</tt>. Reset the lexer and store a new input string. 642<li><tt>lexer.token()</tt>. Return the next token. Returns a special <tt>LexToken</tt> instance on success or 643None if the end of the input text has been reached. 644</ul> 645 646The preferred way to use PLY is to invoke the above methods directly on the lexer object returned by the 647<tt>lex()</tt> function. The legacy interface to PLY involves module-level functions <tt>lex.input()</tt> and <tt>lex.token()</tt>. 
For example:

<blockquote>
<pre>
lex.lex()
lex.input(sometext)
while 1:
    tok = lex.token()
    if not tok: break
    print tok
</pre>
</blockquote>

<p>
In this example, the module-level functions <tt>lex.input()</tt> and <tt>lex.token()</tt> are bound to the <tt>input()</tt>
and <tt>token()</tt> methods of the last lexer created by the lex module.  This interface may go away at some point so
it's probably best not to use it.

<H3><a name="ply_nn14"></a>4.11 The @TOKEN decorator</H3>


In some applications, you may want to build tokens from a series of
more complex regular expression rules.  For example:

<blockquote>
<pre>
digit            = r'([0-9])'
nondigit         = r'([_A-Za-z])'
identifier       = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'

def t_ID(t):
    # want docstring to be identifier above. ?????
    ...
</pre>
</blockquote>

In this case, we want the regular expression rule for <tt>ID</tt> to be one of the variables above. However, there is no
way to directly specify this using a normal documentation string.  To solve this problem, you can use the <tt>@TOKEN</tt>
decorator.  For example:

<blockquote>
<pre>
from ply.lex import TOKEN

@TOKEN(identifier)
def t_ID(t):
    ...
</pre>
</blockquote>

This will attach <tt>identifier</tt> to the docstring for <tt>t_ID()</tt> allowing <tt>lex.py</tt> to work normally.  An alternative
approach to this problem is to set the docstring directly like this:

<blockquote>
<pre>
def t_ID(t):
    ...

t_ID.__doc__ = identifier
</pre>
</blockquote>

<b>NOTE:</b> Use of <tt>@TOKEN</tt> requires Python-2.4 or newer.  If you're concerned about backwards compatibility with older
versions of Python, use the alternative approach of setting the docstring directly.

<H3><a name="ply_nn15"></a>4.12 Optimized mode</H3>


For improved performance, it may be desirable to use Python's
optimized mode (e.g., running Python with the <tt>-O</tt>
option).  However, doing so causes Python to ignore documentation
strings.  This presents special problems for <tt>lex.py</tt>.  To
handle this case, you can create your lexer using
the <tt>optimize</tt> option as follows:

<blockquote>
<pre>
lexer = lex.lex(optimize=1)
</pre>
</blockquote>

Next, run Python in its normal operating mode.  When you do
this, <tt>lex.py</tt> will write a file called <tt>lextab.py</tt> to
the current directory.  This file contains all of the regular
expression rules and tables used during lexing.  On subsequent
executions,
<tt>lextab.py</tt> will simply be imported to build the lexer.  This
approach substantially improves the startup time of the lexer and it
works in Python's optimized mode.

<p>
To change the name of the lexer-generated file, use the <tt>lextab</tt> keyword argument.  For example:

<blockquote>
<pre>
lexer = lex.lex(optimize=1,lextab="footab")
</pre>
</blockquote>

When running in optimized mode, it is important to note that lex disables most error checking.  Thus, this is really only recommended
if you're sure everything is working correctly and you're ready to start releasing production code.
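<p>
Putting these options together, one workable pattern is to run with full error checking while you are still editing the token rules, and only enable optimized table reuse for release builds.  The following is just a sketch of that idea; the <tt>RELEASE</tt> flag and the <tt>footab</tt> module name are illustrative and not part of PLY:

<blockquote>
<pre>
import ply.lex as lex

RELEASE = True        # Hypothetical configuration flag in your application

if RELEASE:
    # Reuse the cached tables in footab.py and skip most validation
    lexer = lex.lex(optimize=1, lextab="footab")
else:
    # Rebuild and fully validate the token rules on every run
    lexer = lex.lex()
</pre>
</blockquote>

If the token rules change, it is probably safest to delete the generated <tt>footab.py</tt> file so that the tables are rebuilt from scratch.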
749 750<H3><a name="ply_nn16"></a>4.13 Debugging</H3> 751 752 753For the purpose of debugging, you can run <tt>lex()</tt> in a debugging mode as follows: 754 755<blockquote> 756<pre> 757lexer = lex.lex(debug=1) 758</pre> 759</blockquote> 760 761<p> 762This will produce various sorts of debugging information including all of the added rules, 763the master regular expressions used by the lexer, and tokens generating during lexing. 764</p> 765 766<p> 767In addition, <tt>lex.py</tt> comes with a simple main function which 768will either tokenize input read from standard input or from a file specified 769on the command line. To use it, simply put this in your lexer: 770</p> 771 772<blockquote> 773<pre> 774if __name__ == '__main__': 775 lex.runmain() 776</pre> 777</blockquote> 778 779Please refer to the "Debugging" section near the end for some more advanced details 780of debugging. 781 782<H3><a name="ply_nn17"></a>4.14 Alternative specification of lexers</H3> 783 784 785As shown in the example, lexers are specified all within one Python module. If you want to 786put token rules in a different module from the one in which you invoke <tt>lex()</tt>, use the 787<tt>module</tt> keyword argument. 788 789<p> 790For example, you might have a dedicated module that just contains 791the token rules: 792 793<blockquote> 794<pre> 795# module: tokrules.py 796# This module just contains the lexing rules 797 798# List of token names. This is always required 799tokens = ( 800 'NUMBER', 801 'PLUS', 802 'MINUS', 803 'TIMES', 804 'DIVIDE', 805 'LPAREN', 806 'RPAREN', 807) 808 809# Regular expression rules for simple tokens 810t_PLUS = r'\+' 811t_MINUS = r'-' 812t_TIMES = r'\*' 813t_DIVIDE = r'/' 814t_LPAREN = r'\(' 815t_RPAREN = r'\)' 816 817# A regular expression rule with some action code 818def t_NUMBER(t): 819 r'\d+' 820 t.value = int(t.value) 821 return t 822 823# Define a rule so we can track line numbers 824def t_newline(t): 825 r'\n+' 826 t.lexer.lineno += len(t.value) 827 828# A string containing ignored characters (spaces and tabs) 829t_ignore = ' \t' 830 831# Error handling rule 832def t_error(t): 833 print "Illegal character '%s'" % t.value[0] 834 t.lexer.skip(1) 835</pre> 836</blockquote> 837 838Now, if you wanted to build a tokenizer from these rules from within a different module, you would do the following (shown for Python interactive mode): 839 840<blockquote> 841<pre> 842>>> import tokrules 843>>> <b>lexer = lex.lex(module=tokrules)</b> 844>>> lexer.input("3 + 4") 845>>> lexer.token() 846LexToken(NUMBER,3,1,1,0) 847>>> lexer.token() 848LexToken(PLUS,'+',1,2) 849>>> lexer.token() 850LexToken(NUMBER,4,1,4) 851>>> lexer.token() 852None 853>>> 854</pre> 855</blockquote> 856 857The <tt>module</tt> option can also be used to define lexers from instances of a class. For example: 858 859<blockquote> 860<pre> 861import ply.lex as lex 862 863class MyLexer: 864 # List of token names. 
This is always required
    tokens = (
       'NUMBER',
       'PLUS',
       'MINUS',
       'TIMES',
       'DIVIDE',
       'LPAREN',
       'RPAREN',
    )

    # Regular expression rules for simple tokens
    t_PLUS    = r'\+'
    t_MINUS   = r'-'
    t_TIMES   = r'\*'
    t_DIVIDE  = r'/'
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'

    # A regular expression rule with some action code
    # Note addition of self parameter since we're in a class
    def t_NUMBER(self,t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Define a rule so we can track line numbers
    def t_newline(self,t):
        r'\n+'
        t.lexer.lineno += len(t.value)

    # A string containing ignored characters (spaces and tabs)
    t_ignore  = ' \t'

    # Error handling rule
    def t_error(self,t):
        print "Illegal character '%s'" % t.value[0]
        t.lexer.skip(1)

    <b># Build the lexer
    def build(self,**kwargs):
        self.lexer = lex.lex(module=self, **kwargs)</b>

    # Test it out
    def test(self,data):
        self.lexer.input(data)
        while True:
             tok = self.lexer.token()
             if not tok: break
             print tok

# Build the lexer and try it out
m = MyLexer()
m.build()           # Build the lexer
m.test("3 + 4")     # Test it
</pre>
</blockquote>


When building a lexer from a class, <em>you should construct the lexer from
an instance of the class</em>, not the class object itself.  This is because
PLY only works properly if the lexer actions are defined by bound-methods.

<p>
When using the <tt>module</tt> option to <tt>lex()</tt>, PLY collects symbols
from the underlying object using the <tt>dir()</tt> function. There is no
direct access to the <tt>__dict__</tt> attribute of the object supplied as a
module value.

<P>
Finally, if you want to keep things nicely encapsulated, but don't want to use a
full-fledged class definition, lexers can be defined using closures.  For example:

<blockquote>
<pre>
import ply.lex as lex

# List of token names.   This is always required
tokens = (
  'NUMBER',
  'PLUS',
  'MINUS',
  'TIMES',
  'DIVIDE',
  'LPAREN',
  'RPAREN',
)

def MyLexer():
    # Regular expression rules for simple tokens
    t_PLUS    = r'\+'
    t_MINUS   = r'-'
    t_TIMES   = r'\*'
    t_DIVIDE  = r'/'
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'

    # A regular expression rule with some action code
    def t_NUMBER(t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Define a rule so we can track line numbers
    def t_newline(t):
        r'\n+'
        t.lexer.lineno += len(t.value)

    # A string containing ignored characters (spaces and tabs)
    t_ignore  = ' \t'

    # Error handling rule
    def t_error(t):
        print "Illegal character '%s'" % t.value[0]
        t.lexer.skip(1)

    # Build the lexer from my environment and return it
    return lex.lex()
</pre>
</blockquote>
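<p>
For completeness, here is a short usage sketch of the closure-based lexer above.  Nothing new is assumed; the object returned by <tt>MyLexer()</tt> is an ordinary lexer built by <tt>lex.lex()</tt>:

<blockquote>
<pre>
lexer = MyLexer()
lexer.input("3 + 4 * 10")
for tok in lexer:
    print tok
</pre>
</blockquote>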
<H3><a name="ply_nn18"></a>4.15 Maintaining state</H3>


In your lexer, you may want to maintain a variety of state
information.  This might include mode settings, symbol tables, and
other details.  As an example, suppose that you wanted to keep
track of how many NUMBER tokens had been encountered.

<p>
One way to do this is to keep a set of global variables in the module
where you created the lexer.  For example:

<blockquote>
<pre>
num_count = 0
def t_NUMBER(t):
    r'\d+'
    global num_count
    num_count += 1
    t.value = int(t.value)
    return t
</pre>
</blockquote>

If you don't like the use of a global variable, another place to store
information is inside the Lexer object created by <tt>lex()</tt>.
To do this, you can use the <tt>lexer</tt> attribute of tokens passed to
the various rules. For example:

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    t.lexer.num_count += 1     # Note use of lexer attribute
    t.value = int(t.value)
    return t

lexer = lex.lex()
lexer.num_count = 0            # Set the initial count
</pre>
</blockquote>

This latter approach has the advantage of being simple and working
correctly in applications where multiple instantiations of a given
lexer exist in the same application.  However, this might also feel
like a gross violation of encapsulation to OO purists.
Just to put your mind at some ease, all
internal attributes of the lexer (with the exception of <tt>lineno</tt>) have names that are prefixed
by <tt>lex</tt> (e.g., <tt>lexdata</tt>, <tt>lexpos</tt>, etc.).  Thus,
it is perfectly safe to store attributes in the lexer that
don't have names starting with that prefix or a name that conflicts with one of the
predefined methods (e.g., <tt>input()</tt>, <tt>token()</tt>, etc.).

<p>
If you don't like assigning values on the lexer object, you can define your lexer as a class as
shown in the previous section:

<blockquote>
<pre>
class MyLexer:
    ...
    def t_NUMBER(self,t):
        r'\d+'
        self.num_count += 1
        t.value = int(t.value)
        return t

    def build(self, **kwargs):
        self.lexer = lex.lex(object=self,**kwargs)

    def __init__(self):
        self.num_count = 0
</pre>
</blockquote>

The class approach may be the easiest to manage if your application is
going to be creating multiple instances of the same lexer and you need
to manage a lot of state.

<p>
State can also be managed through closures.  For example, in Python 3:

<blockquote>
<pre>
def MyLexer():
    num_count = 0
    ...
    def t_NUMBER(t):
        r'\d+'
        nonlocal num_count
        num_count += 1
        t.value = int(t.value)
        return t
    ...
</pre>
</blockquote>

<H3><a name="ply_nn19"></a>4.16 Lexer cloning</H3>


<p>
If necessary, a lexer object can be duplicated by invoking its <tt>clone()</tt> method.  For example:

<blockquote>
<pre>
lexer = lex.lex()
...
newlexer = lexer.clone()
</pre>
</blockquote>

When a lexer is cloned, the copy is exactly identical to the original lexer
including any input text and internal state. However, the clone allows a
different set of input text to be supplied which may be processed separately.
This may be useful in situations when you are writing a parser/compiler that
involves recursive or reentrant processing.  For instance, if you
needed to scan ahead in the input for some reason, you could create a
clone and use it to look ahead.  Or, if you were implementing some kind of preprocessor,
cloned lexers could be used to handle different input files.

<p>
Creating a clone is different than calling <tt>lex.lex()</tt> in that
PLY doesn't regenerate any of the internal tables or regular expressions, so cloning
is a relatively cheap operation.
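<p>
As a small sketch of the look-ahead idea mentioned above (here <tt>data</tt> is assumed to hold the input text), a clone can be advanced independently of the lexer it was copied from:

<blockquote>
<pre>
lexer = lex.lex()
lexer.input(data)

# Peek at the next three tokens using a clone.  Advancing the clone
# does not move the position of the original lexer.
peek = lexer.clone()
upcoming = []
for i in range(3):
    tok = peek.token()
    if not tok: break
    upcoming.append(tok)
</pre>
</blockquote>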
<p>
Special considerations need to be made when cloning lexers that also
maintain their own internal state using classes or closures.  Namely,
you need to be aware that the newly created lexers will share all of
this state with the original lexer.  For example, if you defined a
lexer as a class and did this:

<blockquote>
<pre>
m = MyLexer()
a = lex.lex(object=m)      # Create a lexer

b = a.clone()              # Clone the lexer
</pre>
</blockquote>

Then both <tt>a</tt> and <tt>b</tt> are going to be bound to the same
object <tt>m</tt> and any changes to <tt>m</tt> will be reflected in both lexers.  It's
important to emphasize that <tt>clone()</tt> is only meant to create a new lexer
that reuses the regular expressions and environment of another lexer.  If you
need to make a totally new copy of a lexer, then call <tt>lex()</tt> again.

<H3><a name="ply_nn20"></a>4.17 Internal lexer state</H3>


A Lexer object <tt>lexer</tt> has a number of internal attributes that may be useful in certain
situations.

<p>
<tt>lexer.lexpos</tt>
<blockquote>
This attribute is an integer that contains the current position within the input text.  If you modify
the value, it will change the result of the next call to <tt>token()</tt>.  Within token rule functions, this points
to the first character <em>after</em> the matched text.  If the value is modified within a rule, the next returned token will be
matched at the new position.
</blockquote>

<p>
<tt>lexer.lineno</tt>
<blockquote>
The current value of the line number attribute stored in the lexer.  PLY only specifies that the attribute
exists---it never sets, updates, or performs any processing with it.  If you want to track line numbers,
you will need to add code yourself (see the section on line numbers and positional information).
</blockquote>

<p>
<tt>lexer.lexdata</tt>
<blockquote>
The current input text stored in the lexer.  This is the string passed with the <tt>input()</tt> method.  It
would probably be a bad idea to modify this unless you really know what you're doing.
</blockquote>

<P>
<tt>lexer.lexmatch</tt>
<blockquote>
This is the raw <tt>Match</tt> object returned by the Python <tt>re.match()</tt> function (used internally by PLY) for the
current token.  If you have written a regular expression that contains named groups, you can use this to retrieve those values.
Note: This attribute is only updated when tokens are defined and processed by functions.
</blockquote>

<H3><a name="ply_nn21"></a>4.18 Conditional lexing and start conditions</H3>


In advanced parsing applications, it may be useful to have different
lexing states. For instance, you may want the occurrence of a certain
token or syntactic construct to trigger a different kind of lexing.
PLY supports a feature that allows the underlying lexer to be put into
a series of different states.  Each state can have its own tokens,
lexing rules, and so forth.  The implementation is based largely on
the "start condition" feature of GNU flex.  Details of this can be found
at <a
href="http://www.gnu.org/software/flex/manual/html_chapter/flex_11.html">http://www.gnu.org/software/flex/manual/html_chapter/flex_11.html</a>.

<p>
To define a new lexing state, it must first be declared.
This is done by including a "states" declaration in your 1185lex file. For example: 1186 1187<blockquote> 1188<pre> 1189states = ( 1190 ('foo','exclusive'), 1191 ('bar','inclusive'), 1192) 1193</pre> 1194</blockquote> 1195 1196This declaration declares two states, <tt>'foo'</tt> 1197and <tt>'bar'</tt>. States may be of two types; <tt>'exclusive'</tt> 1198and <tt>'inclusive'</tt>. An exclusive state completely overrides the 1199default behavior of the lexer. That is, lex will only return tokens 1200and apply rules defined specifically for that state. An inclusive 1201state adds additional tokens and rules to the default set of rules. 1202Thus, lex will return both the tokens defined by default in addition 1203to those defined for the inclusive state. 1204 1205<p> 1206Once a state has been declared, tokens and rules are declared by including the 1207state name in token/rule declaration. For example: 1208 1209<blockquote> 1210<pre> 1211t_foo_NUMBER = r'\d+' # Token 'NUMBER' in state 'foo' 1212t_bar_ID = r'[a-zA-Z_][a-zA-Z0-9_]*' # Token 'ID' in state 'bar' 1213 1214def t_foo_newline(t): 1215 r'\n' 1216 t.lexer.lineno += 1 1217</pre> 1218</blockquote> 1219 1220A token can be declared in multiple states by including multiple state names in the declaration. For example: 1221 1222<blockquote> 1223<pre> 1224t_foo_bar_NUMBER = r'\d+' # Defines token 'NUMBER' in both state 'foo' and 'bar' 1225</pre> 1226</blockquote> 1227 1228Alternative, a token can be declared in all states using the 'ANY' in the name. 1229 1230<blockquote> 1231<pre> 1232t_ANY_NUMBER = r'\d+' # Defines a token 'NUMBER' in all states 1233</pre> 1234</blockquote> 1235 1236If no state name is supplied, as is normally the case, the token is associated with a special state <tt>'INITIAL'</tt>. For example, 1237these two declarations are identical: 1238 1239<blockquote> 1240<pre> 1241t_NUMBER = r'\d+' 1242t_INITIAL_NUMBER = r'\d+' 1243</pre> 1244</blockquote> 1245 1246<p> 1247States are also associated with the special <tt>t_ignore</tt> and <tt>t_error()</tt> declarations. For example, if a state treats 1248these differently, you can declare: 1249 1250<blockquote> 1251<pre> 1252t_foo_ignore = " \t\n" # Ignored characters for state 'foo' 1253 1254def t_bar_error(t): # Special error handler for state 'bar' 1255 pass 1256</pre> 1257</blockquote> 1258 1259By default, lexing operates in the <tt>'INITIAL'</tt> state. This state includes all of the normally defined tokens. 1260For users who aren't using different states, this fact is completely transparent. If, during lexing or parsing, you want to change 1261the lexing state, use the <tt>begin()</tt> method. For example: 1262 1263<blockquote> 1264<pre> 1265def t_begin_foo(t): 1266 r'start_foo' 1267 t.lexer.begin('foo') # Starts 'foo' state 1268</pre> 1269</blockquote> 1270 1271To get out of a state, you use <tt>begin()</tt> to switch back to the initial state. For example: 1272 1273<blockquote> 1274<pre> 1275def t_foo_end(t): 1276 r'end_foo' 1277 t.lexer.begin('INITIAL') # Back to the initial state 1278</pre> 1279</blockquote> 1280 1281The management of states can also be done with a stack. 
For example:

<blockquote>
<pre>
def t_begin_foo(t):
    r'start_foo'
    t.lexer.push_state('foo')             # Starts 'foo' state

def t_foo_end(t):
    r'end_foo'
    t.lexer.pop_state()                   # Back to the previous state
</pre>
</blockquote>

<p>
The use of a stack would be useful in situations where there are many ways of entering a new lexing state and you merely want to go back
to the previous state afterwards.

<P>
An example might help clarify.  Suppose you were writing a parser and you wanted to grab sections of arbitrary C code enclosed by
curly braces.  That is, whenever you encounter a starting brace '{', you want to read all of the enclosed code up to the ending brace '}'
and return it as a string.  Doing this with a normal regular expression rule is nearly (if not actually) impossible.  This is because braces can
be nested and can be included in comments and strings.  Thus, simply matching up to the first matching '}' character isn't good enough.  Here is how
you might use lexer states to do this:

<blockquote>
<pre>
# Declare the state
states = (
  ('ccode','exclusive'),
)

# Match the first {. Enter ccode state.
def t_ccode(t):
    r'\{'
    t.lexer.code_start = t.lexer.lexpos        # Record the starting position
    t.lexer.level = 1                          # Initial brace level
    t.lexer.begin('ccode')                     # Enter 'ccode' state

# Rules for the ccode state
def t_ccode_lbrace(t):
    r'\{'
    t.lexer.level +=1

def t_ccode_rbrace(t):
    r'\}'
    t.lexer.level -=1

    # If closing brace, return the code fragment
    if t.lexer.level == 0:
         t.value = t.lexer.lexdata[t.lexer.code_start:t.lexer.lexpos+1]
         t.type = "CCODE"
         t.lexer.lineno += t.value.count('\n')
         t.lexer.begin('INITIAL')
         return t

# C or C++ comment (ignore)
def t_ccode_comment(t):
    r'(/\*(.|\n)*?\*/)|(//.*)'
    pass

# C string
def t_ccode_string(t):
    r'\"([^\\\n]|(\\.))*?\"'

# C character literal
def t_ccode_char(t):
    r'\'([^\\\n]|(\\.))*?\''

# Any sequence of non-whitespace characters (not braces, strings)
def t_ccode_nonspace(t):
    r'[^\s\{\}\'\"]+'

# Ignored characters (whitespace)
t_ccode_ignore = " \t\n"

# For bad characters, we just skip over them
def t_ccode_error(t):
    t.lexer.skip(1)
</pre>
</blockquote>

In this example, the occurrence of the first '{' causes the lexer to record the starting position and enter a new state <tt>'ccode'</tt>.  A collection of rules then match
various parts of the input that follow (comments, strings, etc.).  All of these rules merely discard the token (by not returning a value).
However, if the closing right brace is encountered, the rule <tt>t_ccode_rbrace</tt> collects all of the code (using the earlier recorded starting
position), stores it, and returns a token 'CCODE' containing all of that text.  When returning the token, the lexing state is restored back to its
initial state.

<H3><a name="ply_nn21"></a>4.19 Miscellaneous Issues</H3>


<P>
<li>The lexer requires input to be supplied as a single input string.  Since most machines have more than enough memory, this
rarely presents a performance concern.  However, it means that the lexer currently can't be used with streaming data
such as open files or sockets.  This limitation is primarily a side-effect of using the <tt>re</tt> module.
1376 1377<p> 1378<li>The lexer should work properly with both Unicode strings given as token and pattern matching rules as 1379well as for input text. 1380 1381<p> 1382<li>If you need to supply optional flags to the re.compile() function, use the reflags option to lex. For example: 1383 1384<blockquote> 1385<pre> 1386lex.lex(reflags=re.UNICODE) 1387</pre> 1388</blockquote> 1389 1390<p> 1391<li>Since the lexer is written entirely in Python, its performance is 1392largely determined by that of the Python <tt>re</tt> module. Although 1393the lexer has been written to be as efficient as possible, it's not 1394blazingly fast when used on very large input files. If 1395performance is concern, you might consider upgrading to the most 1396recent version of Python, creating a hand-written lexer, or offloading 1397the lexer into a C extension module. 1398 1399<p> 1400If you are going to create a hand-written lexer and you plan to use it with <tt>yacc.py</tt>, 1401it only needs to conform to the following requirements: 1402 1403<ul> 1404<li>It must provide a <tt>token()</tt> method that returns the next token or <tt>None</tt> if no more 1405tokens are available. 1406<li>The <tt>token()</tt> method must return an object <tt>tok</tt> that has <tt>type</tt> and <tt>value</tt> attributes. 1407</ul> 1408 1409<H2><a name="ply_nn22"></a>5. Parsing basics</H2> 1410 1411 1412<tt>yacc.py</tt> is used to parse language syntax. Before showing an 1413example, there are a few important bits of background that must be 1414mentioned. First, <em>syntax</em> is usually specified in terms of a BNF grammar. 1415For example, if you wanted to parse 1416simple arithmetic expressions, you might first write an unambiguous 1417grammar specification like this: 1418 1419<blockquote> 1420<pre> 1421expression : expression + term 1422 | expression - term 1423 | term 1424 1425term : term * factor 1426 | term / factor 1427 | factor 1428 1429factor : NUMBER 1430 | ( expression ) 1431</pre> 1432</blockquote> 1433 1434In the grammar, symbols such as <tt>NUMBER</tt>, <tt>+</tt>, <tt>-</tt>, <tt>*</tt>, and <tt>/</tt> are known 1435as <em>terminals</em> and correspond to raw input tokens. Identifiers such as <tt>term</tt> and <tt>factor</tt> refer to 1436grammar rules comprised of a collection of terminals and other rules. These identifiers are known as <em>non-terminals</em>. 1437<P> 1438 1439The semantic behavior of a language is often specified using a 1440technique known as syntax directed translation. In syntax directed 1441translation, attributes are attached to each symbol in a given grammar 1442rule along with an action. Whenever a particular grammar rule is 1443recognized, the action describes what to do. 
For example, given the 1444expression grammar above, you might write the specification for a 1445simple calculator like this: 1446 1447<blockquote> 1448<pre> 1449Grammar Action 1450-------------------------------- -------------------------------------------- 1451expression0 : expression1 + term expression0.val = expression1.val + term.val 1452 | expression1 - term expression0.val = expression1.val - term.val 1453 | term expression0.val = term.val 1454 1455term0 : term1 * factor term0.val = term1.val * factor.val 1456 | term1 / factor term0.val = term1.val / factor.val 1457 | factor term0.val = factor.val 1458 1459factor : NUMBER factor.val = int(NUMBER.lexval) 1460 | ( expression ) factor.val = expression.val 1461</pre> 1462</blockquote> 1463 1464A good way to think about syntax directed translation is to 1465view each symbol in the grammar as a kind of object. Associated 1466with each symbol is a value representing its "state" (for example, the 1467<tt>val</tt> attribute above). Semantic 1468actions are then expressed as a collection of functions or methods 1469that operate on the symbols and associated values. 1470 1471<p> 1472Yacc uses a parsing technique known as LR-parsing or shift-reduce parsing. LR parsing is a 1473bottom up technique that tries to recognize the right-hand-side of various grammar rules. 1474Whenever a valid right-hand-side is found in the input, the appropriate action code is triggered and the 1475grammar symbols are replaced by the grammar symbol on the left-hand-side. 1476 1477<p> 1478LR parsing is commonly implemented by shifting grammar symbols onto a 1479stack and looking at the stack and the next input token for patterns that 1480match one of the grammar rules. 1481The details of the algorithm can be found in a compiler textbook, but the 1482following example illustrates the steps that are performed if you 1483wanted to parse the expression 1484<tt>3 + 5 * (10 - 20)</tt> using the grammar defined above. In the example, 1485the special symbol <tt>$</tt> represents the end of input. 1486 1487 1488<blockquote> 1489<pre> 1490Step Symbol Stack Input Tokens Action 1491---- --------------------- --------------------- ------------------------------- 14921 3 + 5 * ( 10 - 20 )$ Shift 3 14932 3 + 5 * ( 10 - 20 )$ Reduce factor : NUMBER 14943 factor + 5 * ( 10 - 20 )$ Reduce term : factor 14954 term + 5 * ( 10 - 20 )$ Reduce expr : term 14965 expr + 5 * ( 10 - 20 )$ Shift + 14976 expr + 5 * ( 10 - 20 )$ Shift 5 14987 expr + 5 * ( 10 - 20 )$ Reduce factor : NUMBER 14998 expr + factor * ( 10 - 20 )$ Reduce term : factor 15009 expr + term * ( 10 - 20 )$ Shift * 150110 expr + term * ( 10 - 20 )$ Shift ( 150211 expr + term * ( 10 - 20 )$ Shift 10 150312 expr + term * ( 10 - 20 )$ Reduce factor : NUMBER 150413 expr + term * ( factor - 20 )$ Reduce term : factor 150514 expr + term * ( term - 20 )$ Reduce expr : term 150615 expr + term * ( expr - 20 )$ Shift - 150716 expr + term * ( expr - 20 )$ Shift 20 150817 expr + term * ( expr - 20 )$ Reduce factor : NUMBER 150918 expr + term * ( expr - factor )$ Reduce term : factor 151019 expr + term * ( expr - term )$ Reduce expr : expr - term 151120 expr + term * ( expr )$ Shift ) 151221 expr + term * ( expr ) $ Reduce factor : (expr) 151322 expr + term * factor $ Reduce term : term * factor 151423 expr + term $ Reduce expr : expr + term 151524 expr $ Reduce expr 151625 $ Success! 1517</pre> 1518</blockquote> 1519 1520When parsing the expression, an underlying state machine and the 1521current input token determine what happens next. 
If the next token 1522looks like part of a valid grammar rule (based on other items on the 1523stack), it is generally shifted onto the stack. If the top of the 1524stack contains a valid right-hand-side of a grammar rule, it is 1525usually "reduced" and the symbols replaced with the symbol on the 1526left-hand-side. When this reduction occurs, the appropriate action is 1527triggered (if defined). If the input token can't be shifted and the 1528top of stack doesn't match any grammar rules, a syntax error has 1529occurred and the parser must take some kind of recovery step (or bail 1530out). A parse is only successful if the parser reaches a state where 1531the symbol stack is empty and there are no more input tokens. 1532 1533<p> 1534It is important to note that the underlying implementation is built 1535around a large finite-state machine that is encoded in a collection of 1536tables. The construction of these tables is non-trivial and 1537beyond the scope of this discussion. However, subtle details of this 1538process explain why, in the example above, the parser chooses to shift 1539a token onto the stack in step 9 rather than reducing the 1540rule <tt>expr : expr + term</tt>. 1541 1542<H2><a name="ply_nn23"></a>6. Yacc</H2> 1543 1544 1545The <tt>ply.yacc</tt> module implements the parsing component of PLY. 1546The name "yacc" stands for "Yet Another Compiler Compiler" and is 1547borrowed from the Unix tool of the same name. 1548 1549<H3><a name="ply_nn24"></a>6.1 An example</H3> 1550 1551 1552Suppose you wanted to make a grammar for simple arithmetic expressions as previously described. Here is 1553how you would do it with <tt>yacc.py</tt>: 1554 1555<blockquote> 1556<pre> 1557# Yacc example 1558 1559import ply.yacc as yacc 1560 1561# Get the token map from the lexer. This is required. 1562from calclex import tokens 1563 1564def p_expression_plus(p): 1565 'expression : expression PLUS term' 1566 p[0] = p[1] + p[3] 1567 1568def p_expression_minus(p): 1569 'expression : expression MINUS term' 1570 p[0] = p[1] - p[3] 1571 1572def p_expression_term(p): 1573 'expression : term' 1574 p[0] = p[1] 1575 1576def p_term_times(p): 1577 'term : term TIMES factor' 1578 p[0] = p[1] * p[3] 1579 1580def p_term_div(p): 1581 'term : term DIVIDE factor' 1582 p[0] = p[1] / p[3] 1583 1584def p_term_factor(p): 1585 'term : factor' 1586 p[0] = p[1] 1587 1588def p_factor_num(p): 1589 'factor : NUMBER' 1590 p[0] = p[1] 1591 1592def p_factor_expr(p): 1593 'factor : LPAREN expression RPAREN' 1594 p[0] = p[2] 1595 1596# Error rule for syntax errors 1597def p_error(p): 1598 print "Syntax error in input!" 1599 1600# Build the parser 1601parser = yacc.yacc() 1602 1603while True: 1604 try: 1605 s = raw_input('calc > ') 1606 except EOFError: 1607 break 1608 if not s: continue 1609 result = parser.parse(s) 1610 print result 1611</pre> 1612</blockquote> 1613 1614In this example, each grammar rule is defined by a Python function 1615where the docstring to that function contains the appropriate 1616context-free grammar specification. The statements that make up the 1617function body implement the semantic actions of the rule. Each function 1618accepts a single argument <tt>p</tt> that is a sequence containing the 1619values of each grammar symbol in the corresponding rule. 
The values
of <tt>p[i]</tt> are mapped to grammar symbols as shown here:

<blockquote>
<pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    #   ^            ^        ^    ^
    #  p[0]         p[1]     p[2] p[3]

    p[0] = p[1] + p[3]
</pre>
</blockquote>

<p>
For tokens, the "value" of the corresponding <tt>p[i]</tt> is the
<em>same</em> as the <tt>p.value</tt> attribute assigned in the lexer
module.  For non-terminals, the value is determined by whatever is
placed in <tt>p[0]</tt> when rules are reduced.  This value can be
anything at all.  However, it is probably most common for the value to be
a simple Python type, a tuple, or an instance.  In this example, we
are relying on the fact that the <tt>NUMBER</tt> token stores an
integer value in its value field.  All of the other rules simply
perform various types of integer operations and propagate the result.
</p>

<p>
Note: The use of negative indices has a special meaning in
yacc---specifically, <tt>p[-1]</tt> does not have the same value
as <tt>p[3]</tt> in this example.  Please see the section on "Embedded
Actions" for further details.
</p>

<p>
The first rule defined in the yacc specification determines the
starting grammar symbol (in this case, a rule for <tt>expression</tt>
appears first).  Whenever the starting rule is reduced by the parser
and no more input is available, parsing stops and the final value is
returned (this value will be whatever the top-most rule placed
in <tt>p[0]</tt>).  Note: an alternative starting symbol can be
specified using the <tt>start</tt> keyword argument to
<tt>yacc()</tt>.

<p>The <tt>p_error(p)</tt> rule is defined to catch syntax errors.
See the error handling section below for more detail.

<p>
To build the parser, call the <tt>yacc.yacc()</tt> function.  This
function looks at the module and attempts to construct all of the LR
parsing tables for the grammar you have specified.  The first
time <tt>yacc.yacc()</tt> is invoked, you will get a message such as
this:

<blockquote>
<pre>
$ python calcparse.py
Generating LALR tables
calc > 
</pre>
</blockquote>

Since table construction is relatively expensive (especially for large
grammars), the resulting parsing table is written to the current
directory in a file called <tt>parsetab.py</tt>.  In addition, a
debugging file called <tt>parser.out</tt> is created.  On subsequent
executions, <tt>yacc</tt> will reload the table from
<tt>parsetab.py</tt> unless it has detected a change in the underlying
grammar (in which case the tables and <tt>parsetab.py</tt> file are
regenerated).  Note: The names of parser output files can be changed
if necessary.  See the <a href="reference.html">PLY Reference</a> for details.

<p>
If any errors are detected in your grammar specification, <tt>yacc.py</tt> will produce
diagnostic messages and possibly raise an exception.  Some of the errors that can be detected include:

<ul>
<li>Duplicated function names (if more than one rule function has the same name in the grammar file).
<li>Shift/reduce and reduce/reduce conflicts generated by ambiguous grammars.
<li>Badly specified grammar rules.
<li>Infinite recursion (rules that can never terminate).
<li>Unused rules and tokens
<li>Undefined rules and tokens
</ul>

The next few sections discuss grammar specification in more detail.

<p>
The final part of the example shows how to actually run the parser
created by
<tt>yacc()</tt>.  To run the parser, you simply have to call
the <tt>parse()</tt> method with a string of input text.  This will run all
of the grammar rules and return the result of the entire parse.  The
result returned is the value assigned to <tt>p[0]</tt> in the starting
grammar rule.

<H3><a name="ply_nn25"></a>6.2 Combining Grammar Rule Functions</H3>

When grammar rules are similar, they can be combined into a single function.
For example, consider the two rules in our earlier example:

<blockquote>
<pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    p[0] = p[1] + p[3]

def p_expression_minus(p):
    'expression : expression MINUS term'
    p[0] = p[1] - p[3]
</pre>
</blockquote>

Instead of writing two functions, you might write a single function like this:

<blockquote>
<pre>
def p_expression(p):
    '''expression : expression PLUS term
                  | expression MINUS term'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
</pre>
</blockquote>

In general, the doc string for any given function can contain multiple grammar rules.  So, it would
have also been legal (although possibly confusing) to write this:

<blockquote>
<pre>
def p_binary_operators(p):
    '''expression : expression PLUS term
                  | expression MINUS term
       term       : term TIMES factor
                  | term DIVIDE factor'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]
</pre>
</blockquote>

When combining grammar rules into a single function, it is usually a good idea for all of the rules to have
a similar structure (e.g., the same number of terms).  Otherwise, the corresponding action code may be more
complicated than necessary.  However, it is possible to handle simple cases using len().  For example:

<blockquote>
<pre>
def p_expressions(p):
    '''expression : expression MINUS expression
                  | MINUS expression'''
    if (len(p) == 4):
        p[0] = p[1] - p[3]
    elif (len(p) == 3):
        p[0] = -p[2]
</pre>
</blockquote>

If parsing performance is a concern, you should resist the urge to put
too much conditional processing into a single grammar rule as shown in
these examples.  When you add checks to see which grammar rule is
being handled, you are actually duplicating the work that the parser
has already performed (i.e., the parser already knows exactly what rule it
matched).  You can eliminate this overhead by using a
separate <tt>p_rule()</tt> function for each grammar rule.

<H3><a name="ply_nn26"></a>6.3 Character Literals</H3>

If desired, a grammar may contain tokens defined as single character literals.
For example:

<blockquote>
<pre>
def p_binary_operators(p):
    '''expression : expression '+' term
                  | expression '-' term
       term       : term '*' factor
                  | term '/' factor'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]
</pre>
</blockquote>

A character literal must be enclosed in quotes such as <tt>'+'</tt>.  In addition, if literals are used, they must be declared in the
corresponding <tt>lex</tt> file through the use of a special <tt>literals</tt> declaration.

<blockquote>
<pre>
# Literals.  Should be placed in module given to lex()
literals = ['+','-','*','/' ]
</pre>
</blockquote>

<b>Character literals are limited to a single character</b>.  Thus, it is not legal to specify literals such as <tt>'&lt;='</tt> or <tt>'=='</tt>.  For this, use
the normal lexing rules (e.g., define a rule such as <tt>t_EQ = r'=='</tt>).

<H3><a name="ply_nn26"></a>6.4 Empty Productions</H3>

<tt>yacc.py</tt> can handle empty productions by defining a rule like this:

<blockquote>
<pre>
def p_empty(p):
    'empty :'
    pass
</pre>
</blockquote>

Now to use the empty production, simply use 'empty' as a symbol.  For example:

<blockquote>
<pre>
def p_optitem(p):
    '''optitem : item
               | empty'''
    ...
</pre>
</blockquote>

Note: You can write empty rules anywhere by simply specifying an empty
right hand side.  However, I personally find that writing an "empty"
rule and using "empty" to denote an empty production is easier to read
and more clearly states your intentions.

<H3><a name="ply_nn28"></a>6.5 Changing the starting symbol</H3>

Normally, the first rule found in a yacc specification defines the starting grammar rule (top level rule).  To change this, simply
supply a <tt>start</tt> specifier in your file.  For example:

<blockquote>
<pre>
start = 'foo'

def p_bar(p):
    'bar : A B'

# This is the starting rule due to the start specifier above
def p_foo(p):
    'foo : bar X'
...
</pre>
</blockquote>

The use of a <tt>start</tt> specifier may be useful during debugging
since you can use it to have yacc build a subset of a larger grammar.
For this purpose, it is also possible to specify a starting symbol as
an argument to <tt>yacc()</tt>.  For example:

<blockquote>
<pre>
yacc.yacc(start='foo')
</pre>
</blockquote>

<H3><a name="ply_nn27"></a>6.6 Dealing With Ambiguous Grammars</H3>

The expression grammar given in the earlier example has been written
in a special format to eliminate ambiguity.  However, in many
situations, it is extremely difficult or awkward to write grammars in
this format.  A much more natural way to express the grammar is in a
more compact form like this:

<blockquote>
<pre>
expression : expression PLUS expression
           | expression MINUS expression
           | expression TIMES expression
           | expression DIVIDE expression
           | LPAREN expression RPAREN
           | NUMBER
</pre>
</blockquote>

Unfortunately, this grammar specification is ambiguous.  For example,
if you are parsing the string "3 * 4 + 5", there is no way to tell how
the operators are supposed to be grouped.
For example, does the 1910expression mean "(3 * 4) + 5" or is it "3 * (4+5)"? 1911 1912<p> 1913When an ambiguous grammar is given to <tt>yacc.py</tt> it will print 1914messages about "shift/reduce conflicts" or "reduce/reduce conflicts". 1915A shift/reduce conflict is caused when the parser generator can't 1916decide whether or not to reduce a rule or shift a symbol on the 1917parsing stack. For example, consider the string "3 * 4 + 5" and the 1918internal parsing stack: 1919 1920<blockquote> 1921<pre> 1922Step Symbol Stack Input Tokens Action 1923---- --------------------- --------------------- ------------------------------- 19241 $ 3 * 4 + 5$ Shift 3 19252 $ 3 * 4 + 5$ Reduce : expression : NUMBER 19263 $ expr * 4 + 5$ Shift * 19274 $ expr * 4 + 5$ Shift 4 19285 $ expr * 4 + 5$ Reduce: expression : NUMBER 19296 $ expr * expr + 5$ SHIFT/REDUCE CONFLICT ???? 1930</pre> 1931</blockquote> 1932 1933In this case, when the parser reaches step 6, it has two options. One 1934is to reduce the rule <tt>expr : expr * expr</tt> on the stack. The 1935other option is to shift the token <tt>+</tt> on the stack. Both 1936options are perfectly legal from the rules of the 1937context-free-grammar. 1938 1939<p> 1940By default, all shift/reduce conflicts are resolved in favor of 1941shifting. Therefore, in the above example, the parser will always 1942shift the <tt>+</tt> instead of reducing. Although this strategy 1943works in many cases (for example, the case of 1944"if-then" versus "if-then-else"), it is not enough for arithmetic expressions. In fact, 1945in the above example, the decision to shift <tt>+</tt> is completely 1946wrong---we should have reduced <tt>expr * expr</tt> since 1947multiplication has higher mathematical precedence than addition. 1948 1949<p>To resolve ambiguity, especially in expression 1950grammars, <tt>yacc.py</tt> allows individual tokens to be assigned a 1951precedence level and associativity. This is done by adding a variable 1952<tt>precedence</tt> to the grammar file like this: 1953 1954<blockquote> 1955<pre> 1956precedence = ( 1957 ('left', 'PLUS', 'MINUS'), 1958 ('left', 'TIMES', 'DIVIDE'), 1959) 1960</pre> 1961</blockquote> 1962 1963This declaration specifies that <tt>PLUS</tt>/<tt>MINUS</tt> have the 1964same precedence level and are left-associative and that 1965<tt>TIMES</tt>/<tt>DIVIDE</tt> have the same precedence and are 1966left-associative. Within the <tt>precedence</tt> declaration, tokens 1967are ordered from lowest to highest precedence. Thus, this declaration 1968specifies that <tt>TIMES</tt>/<tt>DIVIDE</tt> have higher precedence 1969than <tt>PLUS</tt>/<tt>MINUS</tt> (since they appear later in the 1970precedence specification). 1971 1972<p> 1973The precedence specification works by associating a numerical 1974precedence level value and associativity direction to the listed 1975tokens. For example, in the above example you get: 1976 1977<blockquote> 1978<pre> 1979PLUS : level = 1, assoc = 'left' 1980MINUS : level = 1, assoc = 'left' 1981TIMES : level = 2, assoc = 'left' 1982DIVIDE : level = 2, assoc = 'left' 1983</pre> 1984</blockquote> 1985 1986These values are then used to attach a numerical precedence value and 1987associativity direction to each grammar rule. 
<em>This is always
determined by looking at the precedence of the right-most terminal
symbol.</em>  For example:

<blockquote>
<pre>
expression : expression PLUS expression                 # level = 1, left
           | expression MINUS expression                # level = 1, left
           | expression TIMES expression                # level = 2, left
           | expression DIVIDE expression               # level = 2, left
           | LPAREN expression RPAREN                   # level = None (not specified)
           | NUMBER                                     # level = None (not specified)
</pre>
</blockquote>

When shift/reduce conflicts are encountered, the parser generator resolves the conflict by
looking at the precedence rules and associativity specifiers.

<p>
<ol>
<li>If the current token has higher precedence than the rule on the stack, it is shifted.
<li>If the grammar rule on the stack has higher precedence, the rule is reduced.
<li>If the current token and the grammar rule have the same precedence, the
rule is reduced for left associativity, whereas the token is shifted for right associativity.
<li>If nothing is known about the precedence, shift/reduce conflicts are resolved in
favor of shifting (the default).
</ol>

For example, if "expression PLUS expression" has been parsed and the
next token is "TIMES", the action is going to be a shift because
"TIMES" has a higher precedence level than "PLUS".  On the other hand,
if "expression TIMES expression" has been parsed and the next token is
"PLUS", the action is going to be reduce because "PLUS" has a lower
precedence than "TIMES."

<p>
When shift/reduce conflicts are resolved using the first three
techniques (with the help of precedence rules), <tt>yacc.py</tt> will
report no errors or conflicts in the grammar (although it will print
some information in the <tt>parser.out</tt> debugging file).

<p>
One problem with the precedence specifier technique is that it is
sometimes necessary to change the precedence of an operator in certain
contexts.  For example, consider a unary-minus operator in "3 + 4 * -5".
Mathematically, the unary minus is normally given a very high
precedence--being evaluated before the multiply.  However, in our
precedence specifier, MINUS has a lower precedence than TIMES.  To
deal with this, precedence rules can be given for so-called "fictitious tokens"
like this:

<blockquote>
<pre>
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),            # Unary minus operator
)
</pre>
</blockquote>

Now, in the grammar file, we can write our unary minus rule like this:

<blockquote>
<pre>
def p_expr_uminus(p):
    'expression : MINUS expression %prec UMINUS'
    p[0] = -p[2]
</pre>
</blockquote>

In this case, <tt>%prec UMINUS</tt> overrides the default rule precedence--setting it to that
of UMINUS in the precedence specifier.

<p>
At first, the use of UMINUS in this example may appear very confusing.
UMINUS is not an input token or a grammar rule.  Instead, you should
think of it as the name of a special marker in the precedence table.  When you use the <tt>%prec</tt> qualifier, you're simply
telling yacc that you want the precedence of the expression to be the same as for this special marker instead of the usual precedence.
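<p>
Putting these pieces together, here is a sketch (a minimal example, not the
calculator program from section 6.1) of how the ambiguous expression grammar
might be written using a <tt>precedence</tt> table and a unary-minus rule.  It
assumes the same tokens and the same <tt>calclex</tt> module used earlier:

<blockquote>
<pre>
import ply.yacc as yacc
from calclex import tokens     # Assumes the lexer module from the earlier example

precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),            # Fictitious token for unary minus
)

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]

def p_expression_uminus(p):
    'expression : MINUS expression %prec UMINUS'
    p[0] = -p[2]

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = p[2]

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = p[1]

def p_error(p):
    print "Syntax error in input!"

parser = yacc.yacc()
</pre>
</blockquote>

Because every shift/reduce conflict in this grammar is resolved by the
<tt>precedence</tt> table, <tt>yacc.yacc()</tt> builds it without reporting
any conflicts.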
<p>
It is also possible to specify non-associativity in the <tt>precedence</tt> table.  This would
be used when you <em>don't</em> want operations to chain together.  For example, suppose
you wanted to support comparison operators like <tt>&lt;</tt> and <tt>&gt;</tt> but you didn't want to allow
combinations like <tt>a &lt; b &lt; c</tt>.  To do this, simply specify a rule like this:

<blockquote>
<pre>
precedence = (
    ('nonassoc', 'LESSTHAN', 'GREATERTHAN'),  # Nonassociative operators
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),            # Unary minus operator
)
</pre>
</blockquote>

<p>
If you do this, the occurrence of input text such as <tt>a &lt; b &lt; c</tt> will result in a syntax error.  However, simple
expressions such as <tt>a &lt; b</tt> will still be fine.

<p>
Reduce/reduce conflicts are caused when there are multiple grammar
rules that can be applied to a given set of symbols.  This kind of
conflict is almost always bad and is always resolved by picking the
rule that appears first in the grammar file.  Reduce/reduce conflicts
are almost always caused when different sets of grammar rules somehow
generate the same set of symbols.  For example:

<blockquote>
<pre>
assignment : ID EQUALS NUMBER
           | ID EQUALS expression

expression : expression PLUS expression
           | expression MINUS expression
           | expression TIMES expression
           | expression DIVIDE expression
           | LPAREN expression RPAREN
           | NUMBER
</pre>
</blockquote>

In this case, a reduce/reduce conflict exists between these two rules:

<blockquote>
<pre>
assignment : ID EQUALS NUMBER
expression : NUMBER
</pre>
</blockquote>

For example, if you wrote "a = 5", the parser can't figure out if this
is supposed to be reduced as <tt>assignment : ID EQUALS NUMBER</tt> or
whether it's supposed to reduce the 5 as an expression and then reduce
the rule <tt>assignment : ID EQUALS expression</tt>.

<p>
It should be noted that reduce/reduce conflicts are notoriously
difficult to spot simply by looking at the input grammar.  When a
reduce/reduce conflict occurs, <tt>yacc()</tt> will try to help by
printing a warning message such as this:

<blockquote>
<pre>
WARNING: 1 reduce/reduce conflict
WARNING: reduce/reduce conflict in state 15 resolved using rule (assignment -> ID EQUALS NUMBER)
WARNING: rejected rule (expression -> NUMBER)
</pre>
</blockquote>

This message identifies the two rules that are in conflict.  However,
it may not tell you how the parser arrived at such a state.  To try
and figure it out, you'll probably have to look at your grammar and
the contents of the
<tt>parser.out</tt> debugging file with an appropriately high level of
caffeination.

<H3><a name="ply_nn28"></a>6.7 The parser.out file</H3>

Tracking down shift/reduce and reduce/reduce conflicts is one of the finer pleasures of using an LR
parsing algorithm.  To assist in debugging, <tt>yacc.py</tt> creates a debugging file called
'parser.out' when it generates the parsing table.
The contents of this file look like the following: 2151 2152<blockquote> 2153<pre> 2154Unused terminals: 2155 2156 2157Grammar 2158 2159Rule 1 expression -> expression PLUS expression 2160Rule 2 expression -> expression MINUS expression 2161Rule 3 expression -> expression TIMES expression 2162Rule 4 expression -> expression DIVIDE expression 2163Rule 5 expression -> NUMBER 2164Rule 6 expression -> LPAREN expression RPAREN 2165 2166Terminals, with rules where they appear 2167 2168TIMES : 3 2169error : 2170MINUS : 2 2171RPAREN : 6 2172LPAREN : 6 2173DIVIDE : 4 2174PLUS : 1 2175NUMBER : 5 2176 2177Nonterminals, with rules where they appear 2178 2179expression : 1 1 2 2 3 3 4 4 6 0 2180 2181 2182Parsing method: LALR 2183 2184 2185state 0 2186 2187 S' -> . expression 2188 expression -> . expression PLUS expression 2189 expression -> . expression MINUS expression 2190 expression -> . expression TIMES expression 2191 expression -> . expression DIVIDE expression 2192 expression -> . NUMBER 2193 expression -> . LPAREN expression RPAREN 2194 2195 NUMBER shift and go to state 3 2196 LPAREN shift and go to state 2 2197 2198 2199state 1 2200 2201 S' -> expression . 2202 expression -> expression . PLUS expression 2203 expression -> expression . MINUS expression 2204 expression -> expression . TIMES expression 2205 expression -> expression . DIVIDE expression 2206 2207 PLUS shift and go to state 6 2208 MINUS shift and go to state 5 2209 TIMES shift and go to state 4 2210 DIVIDE shift and go to state 7 2211 2212 2213state 2 2214 2215 expression -> LPAREN . expression RPAREN 2216 expression -> . expression PLUS expression 2217 expression -> . expression MINUS expression 2218 expression -> . expression TIMES expression 2219 expression -> . expression DIVIDE expression 2220 expression -> . NUMBER 2221 expression -> . LPAREN expression RPAREN 2222 2223 NUMBER shift and go to state 3 2224 LPAREN shift and go to state 2 2225 2226 2227state 3 2228 2229 expression -> NUMBER . 2230 2231 $ reduce using rule 5 2232 PLUS reduce using rule 5 2233 MINUS reduce using rule 5 2234 TIMES reduce using rule 5 2235 DIVIDE reduce using rule 5 2236 RPAREN reduce using rule 5 2237 2238 2239state 4 2240 2241 expression -> expression TIMES . expression 2242 expression -> . expression PLUS expression 2243 expression -> . expression MINUS expression 2244 expression -> . expression TIMES expression 2245 expression -> . expression DIVIDE expression 2246 expression -> . NUMBER 2247 expression -> . LPAREN expression RPAREN 2248 2249 NUMBER shift and go to state 3 2250 LPAREN shift and go to state 2 2251 2252 2253state 5 2254 2255 expression -> expression MINUS . expression 2256 expression -> . expression PLUS expression 2257 expression -> . expression MINUS expression 2258 expression -> . expression TIMES expression 2259 expression -> . expression DIVIDE expression 2260 expression -> . NUMBER 2261 expression -> . LPAREN expression RPAREN 2262 2263 NUMBER shift and go to state 3 2264 LPAREN shift and go to state 2 2265 2266 2267state 6 2268 2269 expression -> expression PLUS . expression 2270 expression -> . expression PLUS expression 2271 expression -> . expression MINUS expression 2272 expression -> . expression TIMES expression 2273 expression -> . expression DIVIDE expression 2274 expression -> . NUMBER 2275 expression -> . LPAREN expression RPAREN 2276 2277 NUMBER shift and go to state 3 2278 LPAREN shift and go to state 2 2279 2280 2281state 7 2282 2283 expression -> expression DIVIDE . expression 2284 expression -> . 
expression PLUS expression 2285 expression -> . expression MINUS expression 2286 expression -> . expression TIMES expression 2287 expression -> . expression DIVIDE expression 2288 expression -> . NUMBER 2289 expression -> . LPAREN expression RPAREN 2290 2291 NUMBER shift and go to state 3 2292 LPAREN shift and go to state 2 2293 2294 2295state 8 2296 2297 expression -> LPAREN expression . RPAREN 2298 expression -> expression . PLUS expression 2299 expression -> expression . MINUS expression 2300 expression -> expression . TIMES expression 2301 expression -> expression . DIVIDE expression 2302 2303 RPAREN shift and go to state 13 2304 PLUS shift and go to state 6 2305 MINUS shift and go to state 5 2306 TIMES shift and go to state 4 2307 DIVIDE shift and go to state 7 2308 2309 2310state 9 2311 2312 expression -> expression TIMES expression . 2313 expression -> expression . PLUS expression 2314 expression -> expression . MINUS expression 2315 expression -> expression . TIMES expression 2316 expression -> expression . DIVIDE expression 2317 2318 $ reduce using rule 3 2319 PLUS reduce using rule 3 2320 MINUS reduce using rule 3 2321 TIMES reduce using rule 3 2322 DIVIDE reduce using rule 3 2323 RPAREN reduce using rule 3 2324 2325 ! PLUS [ shift and go to state 6 ] 2326 ! MINUS [ shift and go to state 5 ] 2327 ! TIMES [ shift and go to state 4 ] 2328 ! DIVIDE [ shift and go to state 7 ] 2329 2330state 10 2331 2332 expression -> expression MINUS expression . 2333 expression -> expression . PLUS expression 2334 expression -> expression . MINUS expression 2335 expression -> expression . TIMES expression 2336 expression -> expression . DIVIDE expression 2337 2338 $ reduce using rule 2 2339 PLUS reduce using rule 2 2340 MINUS reduce using rule 2 2341 RPAREN reduce using rule 2 2342 TIMES shift and go to state 4 2343 DIVIDE shift and go to state 7 2344 2345 ! TIMES [ reduce using rule 2 ] 2346 ! DIVIDE [ reduce using rule 2 ] 2347 ! PLUS [ shift and go to state 6 ] 2348 ! MINUS [ shift and go to state 5 ] 2349 2350state 11 2351 2352 expression -> expression PLUS expression . 2353 expression -> expression . PLUS expression 2354 expression -> expression . MINUS expression 2355 expression -> expression . TIMES expression 2356 expression -> expression . DIVIDE expression 2357 2358 $ reduce using rule 1 2359 PLUS reduce using rule 1 2360 MINUS reduce using rule 1 2361 RPAREN reduce using rule 1 2362 TIMES shift and go to state 4 2363 DIVIDE shift and go to state 7 2364 2365 ! TIMES [ reduce using rule 1 ] 2366 ! DIVIDE [ reduce using rule 1 ] 2367 ! PLUS [ shift and go to state 6 ] 2368 ! MINUS [ shift and go to state 5 ] 2369 2370state 12 2371 2372 expression -> expression DIVIDE expression . 2373 expression -> expression . PLUS expression 2374 expression -> expression . MINUS expression 2375 expression -> expression . TIMES expression 2376 expression -> expression . DIVIDE expression 2377 2378 $ reduce using rule 4 2379 PLUS reduce using rule 4 2380 MINUS reduce using rule 4 2381 TIMES reduce using rule 4 2382 DIVIDE reduce using rule 4 2383 RPAREN reduce using rule 4 2384 2385 ! PLUS [ shift and go to state 6 ] 2386 ! MINUS [ shift and go to state 5 ] 2387 ! TIMES [ shift and go to state 4 ] 2388 ! DIVIDE [ shift and go to state 7 ] 2389 2390state 13 2391 2392 expression -> LPAREN expression RPAREN . 

    $                reduce using rule 6
    PLUS             reduce using rule 6
    MINUS            reduce using rule 6
    TIMES            reduce using rule 6
    DIVIDE           reduce using rule 6
    RPAREN           reduce using rule 6
</pre>
</blockquote>

The different states that appear in this file are a representation of
every possible sequence of valid input tokens allowed by the grammar.
When receiving input tokens, the parser is building up a stack and
looking for matching rules.  Each state keeps track of the grammar
rules that might be in the process of being matched at that point.  Within each
rule, the "." character indicates the current location of the parse
within that rule.  In addition, the actions for each valid input token
are listed.  When a shift/reduce or reduce/reduce conflict arises,
rules <em>not</em> selected are prefixed with an !.  For example:

<blockquote>
<pre>
  ! TIMES           [ reduce using rule 2 ]
  ! DIVIDE          [ reduce using rule 2 ]
  ! PLUS            [ shift and go to state 6 ]
  ! MINUS           [ shift and go to state 5 ]
</pre>
</blockquote>

By looking at these rules (and with a little practice), you can usually track down the source
of most parsing conflicts.  It should also be stressed that not all shift-reduce conflicts are
bad.  However, the only way to be sure that they are resolved correctly is to look at <tt>parser.out</tt>.

<H3><a name="ply_nn29"></a>6.8 Syntax Error Handling</H3>

If you are creating a parser for production use, the handling of
syntax errors is important.  As a general rule, you don't want a
parser to simply throw up its hands and stop at the first sign of
trouble.  Instead, you want it to report the error, recover if possible, and
continue parsing so that all of the errors in the input get reported
to the user at once.  This is the standard behavior found in compilers
for languages such as C, C++, and Java.

In PLY, when a syntax error occurs during parsing, the error is immediately
detected (i.e., the parser does not read any more tokens beyond the
source of the error).  However, at this point, the parser enters a
recovery mode that can be used to try and continue further parsing.
As a general rule, error recovery in LR parsers is a delicate
topic that involves ancient rituals and black-magic.  The recovery mechanism
provided by <tt>yacc.py</tt> is comparable to Unix yacc so you may want
to consult a book like O'Reilly's "Lex and Yacc" for some of the finer details.

<p>
When a syntax error occurs, <tt>yacc.py</tt> performs the following steps:

<ol>
<li>On the first occurrence of an error, the user-defined <tt>p_error()</tt> function
is called with the offending token as an argument.  However, if the syntax error is due to
reaching the end-of-file, <tt>p_error()</tt> is called with an argument of <tt>None</tt>.
Afterwards, the parser enters
an "error-recovery" mode in which it will not make future calls to <tt>p_error()</tt> until it
has successfully shifted at least 3 tokens onto the parsing stack.

<p>
<li>If no recovery action is taken in <tt>p_error()</tt>, the offending lookahead token is replaced
with a special <tt>error</tt> token.

<p>
<li>If the offending lookahead token is already set to <tt>error</tt>, the top item of the parsing stack is
deleted.
2464 2465<p> 2466<li>If the entire parsing stack is unwound, the parser enters a restart state and attempts to start 2467parsing from its initial state. 2468 2469<p> 2470<li>If a grammar rule accepts <tt>error</tt> as a token, it will be 2471shifted onto the parsing stack. 2472 2473<p> 2474<li>If the top item of the parsing stack is <tt>error</tt>, lookahead tokens will be discarded until the 2475parser can successfully shift a new symbol or reduce a rule involving <tt>error</tt>. 2476</ol> 2477 2478<H4><a name="ply_nn30"></a>6.8.1 Recovery and resynchronization with error rules</H4> 2479 2480 2481The most well-behaved approach for handling syntax errors is to write grammar rules that include the <tt>error</tt> 2482token. For example, suppose your language had a grammar rule for a print statement like this: 2483 2484<blockquote> 2485<pre> 2486def p_statement_print(p): 2487 'statement : PRINT expr SEMI' 2488 ... 2489</pre> 2490</blockquote> 2491 2492To account for the possibility of a bad expression, you might write an additional grammar rule like this: 2493 2494<blockquote> 2495<pre> 2496def p_statement_print_error(p): 2497 'statement : PRINT error SEMI' 2498 print "Syntax error in print statement. Bad expression" 2499 2500</pre> 2501</blockquote> 2502 2503In this case, the <tt>error</tt> token will match any sequence of 2504tokens that might appear up to the first semicolon that is 2505encountered. Once the semicolon is reached, the rule will be 2506invoked and the <tt>error</tt> token will go away. 2507 2508<p> 2509This type of recovery is sometimes known as parser resynchronization. 2510The <tt>error</tt> token acts as a wildcard for any bad input text and 2511the token immediately following <tt>error</tt> acts as a 2512synchronization token. 2513 2514<p> 2515It is important to note that the <tt>error</tt> token usually does not appear as the last token 2516on the right in an error rule. For example: 2517 2518<blockquote> 2519<pre> 2520def p_statement_print_error(p): 2521 'statement : PRINT error' 2522 print "Syntax error in print statement. Bad expression" 2523</pre> 2524</blockquote> 2525 2526This is because the first bad token encountered will cause the rule to 2527be reduced--which may make it difficult to recover if more bad tokens 2528immediately follow. 2529 2530<H4><a name="ply_nn31"></a>6.8.2 Panic mode recovery</H4> 2531 2532 2533An alternative error recovery scheme is to enter a panic mode recovery in which tokens are 2534discarded to a point where the parser might be able to recover in some sensible manner. 2535 2536<p> 2537Panic mode recovery is implemented entirely in the <tt>p_error()</tt> function. For example, this 2538function starts discarding tokens until it reaches a closing '}'. Then, it restarts the 2539parser in its initial state. 2540 2541<blockquote> 2542<pre> 2543def p_error(p): 2544 print "Whoa. You are seriously hosed." 2545 # Read ahead looking for a closing '}' 2546 while 1: 2547 tok = yacc.token() # Get the next token 2548 if not tok or tok.type == 'RBRACE': break 2549 yacc.restart() 2550</pre> 2551</blockquote> 2552 2553<p> 2554This function simply discards the bad token and tells the parser that the error was ok. 2555 2556<blockquote> 2557<pre> 2558def p_error(p): 2559 print "Syntax error at token", p.type 2560 # Just discard the token and tell the parser it's okay. 
2561 yacc.errok() 2562</pre> 2563</blockquote> 2564 2565<P> 2566Within the <tt>p_error()</tt> function, three functions are available to control the behavior 2567of the parser: 2568<p> 2569<ul> 2570<li><tt>yacc.errok()</tt>. This resets the parser state so it doesn't think it's in error-recovery 2571mode. This will prevent an <tt>error</tt> token from being generated and will reset the internal 2572error counters so that the next syntax error will call <tt>p_error()</tt> again. 2573 2574<p> 2575<li><tt>yacc.token()</tt>. This returns the next token on the input stream. 2576 2577<p> 2578<li><tt>yacc.restart()</tt>. This discards the entire parsing stack and resets the parser 2579to its initial state. 2580</ul> 2581 2582Note: these functions are only available when invoking <tt>p_error()</tt> and are not available 2583at any other time. 2584 2585<p> 2586To supply the next lookahead token to the parser, <tt>p_error()</tt> can return a token. This might be 2587useful if trying to synchronize on special characters. For example: 2588 2589<blockquote> 2590<pre> 2591def p_error(p): 2592 # Read ahead looking for a terminating ";" 2593 while 1: 2594 tok = yacc.token() # Get the next token 2595 if not tok or tok.type == 'SEMI': break 2596 yacc.errok() 2597 2598 # Return SEMI to the parser as the next lookahead token 2599 return tok 2600</pre> 2601</blockquote> 2602 2603<H4><a name="ply_nn35"></a>6.8.3 Signaling an error from a production</H4> 2604 2605 2606If necessary, a production rule can manually force the parser to enter error recovery. This 2607is done by raising the <tt>SyntaxError</tt> exception like this: 2608 2609<blockquote> 2610<pre> 2611def p_production(p): 2612 'production : some production ...' 2613 raise SyntaxError 2614</pre> 2615</blockquote> 2616 2617The effect of raising <tt>SyntaxError</tt> is the same as if the last symbol shifted onto the 2618parsing stack was actually a syntax error. Thus, when you do this, the last symbol shifted is popped off 2619of the parsing stack and the current lookahead token is set to an <tt>error</tt> token. The parser 2620then enters error-recovery mode where it tries to reduce rules that can accept <tt>error</tt> tokens. 2621The steps that follow from this point are exactly the same as if a syntax error were detected and 2622<tt>p_error()</tt> were called. 2623 2624<P> 2625One important aspect of manually setting an error is that the <tt>p_error()</tt> function will <b>NOT</b> be 2626called in this case. If you need to issue an error message, make sure you do it in the production that 2627raises <tt>SyntaxError</tt>. 2628 2629<P> 2630Note: This feature of PLY is meant to mimic the behavior of the YYERROR macro in yacc. 2631 2632 2633<H4><a name="ply_nn32"></a>6.8.4 General comments on error handling</H4> 2634 2635 2636For normal types of languages, error recovery with error rules and resynchronization characters is probably the most reliable 2637technique. This is because you can instrument the grammar to catch errors at selected places where it is relatively easy 2638to recover and continue parsing. Panic mode recovery is really only useful in certain specialized applications where you might want 2639to discard huge portions of the input text to find a valid restart point. 2640 2641<H3><a name="ply_nn33"></a>6.9 Line Number and Position Tracking</H3> 2642 2643 2644Position tracking is often a tricky problem when writing compilers. 2645By default, PLY tracks the line number and position of all tokens. 
2646This information is available using the following functions: 2647 2648<ul> 2649<li><tt>p.lineno(num)</tt>. Return the line number for symbol <em>num</em> 2650<li><tt>p.lexpos(num)</tt>. Return the lexing position for symbol <em>num</em> 2651</ul> 2652 2653For example: 2654 2655<blockquote> 2656<pre> 2657def p_expression(p): 2658 'expression : expression PLUS expression' 2659 line = p.lineno(2) # line number of the PLUS token 2660 index = p.lexpos(2) # Position of the PLUS token 2661</pre> 2662</blockquote> 2663 2664As an optional feature, <tt>yacc.py</tt> can automatically track line 2665numbers and positions for all of the grammar symbols as well. 2666However, this extra tracking requires extra processing and can 2667significantly slow down parsing. Therefore, it must be enabled by 2668passing the 2669<tt>tracking=True</tt> option to <tt>yacc.parse()</tt>. For example: 2670 2671<blockquote> 2672<pre> 2673yacc.parse(data,tracking=True) 2674</pre> 2675</blockquote> 2676 2677Once enabled, the <tt>lineno()</tt> and <tt>lexpos()</tt> methods work 2678for all grammar symbols. In addition, two additional methods can be 2679used: 2680 2681<ul> 2682<li><tt>p.linespan(num)</tt>. Return a tuple (startline,endline) with the starting and ending line number for symbol <em>num</em>. 2683<li><tt>p.lexspan(num)</tt>. Return a tuple (start,end) with the starting and ending positions for symbol <em>num</em>. 2684</ul> 2685 2686For example: 2687 2688<blockquote> 2689<pre> 2690def p_expression(p): 2691 'expression : expression PLUS expression' 2692 p.lineno(1) # Line number of the left expression 2693 p.lineno(2) # line number of the PLUS operator 2694 p.lineno(3) # line number of the right expression 2695 ... 2696 start,end = p.linespan(3) # Start,end lines of the right expression 2697 starti,endi = p.lexspan(3) # Start,end positions of right expression 2698 2699</pre> 2700</blockquote> 2701 2702Note: The <tt>lexspan()</tt> function only returns the range of values up to the start of the last grammar symbol. 2703 2704<p> 2705Although it may be convenient for PLY to track position information on 2706all grammar symbols, this is often unnecessary. For example, if you 2707are merely using line number information in an error message, you can 2708often just key off of a specific token in the grammar rule. For 2709example: 2710 2711<blockquote> 2712<pre> 2713def p_bad_func(p): 2714 'funccall : fname LPAREN error RPAREN' 2715 # Line number reported from LPAREN token 2716 print "Bad function call at line", p.lineno(2) 2717</pre> 2718</blockquote> 2719 2720<p> 2721Similarly, you may get better parsing performance if you only 2722selectively propagate line number information where it's needed using 2723the <tt>p.set_lineno()</tt> method. For example: 2724 2725<blockquote> 2726<pre> 2727def p_fname(p): 2728 'fname : ID' 2729 p[0] = p[1] 2730 p.set_lineno(0,p.lineno(1)) 2731</pre> 2732</blockquote> 2733 2734PLY doesn't retain line number information from rules that have already been 2735parsed. If you are building an abstract syntax tree and need to have line numbers, 2736you should make sure that the line numbers appear in the tree itself. 2737 2738<H3><a name="ply_nn34"></a>6.10 AST Construction</H3> 2739 2740 2741<tt>yacc.py</tt> provides no special functions for constructing an 2742abstract syntax tree. However, such construction is easy enough to do 2743on your own. 2744 2745<p>A minimal way to construct a tree is to simply create and 2746propagate a tuple or list in each grammar rule function. 
There
are many possible ways to do this, but one example would be something
like this:

<blockquote>
<pre>
def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = ('binary-expression',p[2],p[1],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = ('group-expression',p[2])

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = ('number-expression',p[1])
</pre>
</blockquote>

<p>
Another approach is to create a set of data structures for different
kinds of abstract syntax tree nodes and assign nodes to <tt>p[0]</tt>
in each rule.  For example:

<blockquote>
<pre>
class Expr: pass

class BinOp(Expr):
    def __init__(self,left,op,right):
        self.type = "binop"
        self.left = left
        self.right = right
        self.op = op

class Number(Expr):
    def __init__(self,value):
        self.type = "number"
        self.value = value

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = BinOp(p[1],p[2],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = p[2]

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = Number(p[1])
</pre>
</blockquote>

The advantage to this approach is that it may make it easier to attach more complicated
semantics, type checking, code generation, and other features to the node classes.

<p>
To simplify tree traversal, it may make sense to pick a very generic
tree structure for your parse tree nodes.  For example:

<blockquote>
<pre>
class Node:
    def __init__(self,type,children=None,leaf=None):
        self.type = type
        if children:
            self.children = children
        else:
            self.children = [ ]
        self.leaf = leaf

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = Node("binop", [p[1],p[3]], p[2])
</pre>
</blockquote>
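<p>
PLY does not provide an evaluator or tree-walker for you, but a small sketch may
help make the idea concrete.  For instance, the tuple-based trees produced by the
first example above could be evaluated with a recursive function like this (the
helper below is purely illustrative and not part of PLY):

<blockquote>
<pre>
def evaluate(node):
    # Walk a tuple-based parse tree of the form shown above and compute its value
    nodetype = node[0]
    if nodetype == 'number-expression':
        return node[1]
    elif nodetype == 'group-expression':
        return evaluate(node[1])
    elif nodetype == 'binary-expression':
        op, left, right = node[1], node[2], node[3]
        if op == '+':
            return evaluate(left) + evaluate(right)
        elif op == '-':
            return evaluate(left) - evaluate(right)
        elif op == '*':
            return evaluate(left) * evaluate(right)
        elif op == '/':
            return evaluate(left) / evaluate(right)
    raise ValueError("Unknown node type %s" % nodetype)
</pre>
</blockquote>

A similar function written against the <tt>Node</tt> class would switch
on <tt>node.type</tt> and recurse through <tt>node.children</tt>.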
<H3><a name="ply_nn35"></a>6.11 Embedded Actions</H3>

The parsing technique used by yacc only allows actions to be executed at the end of a rule.  For example,
suppose you have a rule like this:

<blockquote>
<pre>
def p_foo(p):
    "foo : A B C D"
    print "Parsed a foo", p[1],p[2],p[3],p[4]
</pre>
</blockquote>

<p>
In this case, the supplied action code only executes after all of the
symbols <tt>A</tt>, <tt>B</tt>, <tt>C</tt>, and <tt>D</tt> have been
parsed.  Sometimes, however, it is useful to execute small code
fragments during intermediate stages of parsing.  For example, suppose
you wanted to perform some action immediately after <tt>A</tt> has
been parsed.  To do this, write an empty rule like this:

<blockquote>
<pre>
def p_foo(p):
    "foo : A seen_A B C D"
    print "Parsed a foo", p[1],p[3],p[4],p[5]
    print "seen_A returned", p[2]

def p_seen_A(p):
    "seen_A :"
    print "Saw an A = ", p[-1]   # Access grammar symbol to left
    p[0] = some_value            # Assign value to seen_A
</pre>
</blockquote>

<p>
In this example, the empty <tt>seen_A</tt> rule executes immediately
after <tt>A</tt> is shifted onto the parsing stack.  Within this
rule, <tt>p[-1]</tt> refers to the symbol on the stack that appears
immediately to the left of the <tt>seen_A</tt> symbol.  In this case,
it would be the value of <tt>A</tt> in the <tt>foo</tt> rule
immediately above.  Like other rules, a value can be returned from an
embedded action by simply assigning it to <tt>p[0]</tt>.

<p>
The use of embedded actions can sometimes introduce extra shift/reduce conflicts.  For example,
this grammar has no conflicts:

<blockquote>
<pre>
def p_foo(p):
    """foo : abcd
           | abcx"""

def p_abcd(p):
    "abcd : A B C D"

def p_abcx(p):
    "abcx : A B C X"
</pre>
</blockquote>

However, if you insert an embedded action into one of the rules like this,

<blockquote>
<pre>
def p_foo(p):
    """foo : abcd
           | abcx"""

def p_abcd(p):
    "abcd : A B C D"

def p_abcx(p):
    "abcx : A B seen_AB C X"

def p_seen_AB(p):
    "seen_AB :"
</pre>
</blockquote>

an extra shift-reduce conflict will be introduced.  This conflict is
caused by the fact that the same symbol <tt>C</tt> appears next in
both the <tt>abcd</tt> and <tt>abcx</tt> rules.  The parser can either
shift the symbol (<tt>abcd</tt> rule) or reduce the empty
rule <tt>seen_AB</tt> (<tt>abcx</tt> rule).

<p>
A common use of embedded rules is to control other aspects of parsing
such as scoping of local variables.  For example, if you were parsing C code, you might
write code like this:

<blockquote>
<pre>
def p_statements_block(p):
    "statements : LBRACE new_scope statements RBRACE"
    # Action code
    ...
    pop_scope()        # Return to previous scope

def p_new_scope(p):
    "new_scope :"
    # Create a new scope for local variables
    s = new_scope()
    push_scope(s)
    ...
</pre>
</blockquote>

In this case, the embedded action <tt>new_scope</tt> executes
immediately after a <tt>LBRACE</tt> (<tt>{</tt>) symbol is parsed.
This might adjust internal symbol tables and other aspects of the
parser.  Upon completion of the rule <tt>statements_block</tt>, code
might undo the operations performed in the embedded action
(e.g., <tt>pop_scope()</tt>).
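<p>
The helpers <tt>new_scope()</tt>, <tt>push_scope()</tt>, and <tt>pop_scope()</tt>
are not part of PLY---they stand in for whatever symbol table machinery your
compiler uses.  A minimal sketch of such helpers, assuming a simple stack of
dictionaries, might look like this:

<blockquote>
<pre>
# Purely illustrative scope management (not part of PLY)
scope_stack = [ { } ]            # The global scope

def new_scope():
    return { }

def push_scope(s):
    scope_stack.append(s)

def pop_scope():
    return scope_stack.pop()

def declare(name, value):
    scope_stack[-1][name] = value        # Add a name to the current scope

def lookup(name):
    # Search from the innermost scope outwards
    for scope in reversed(scope_stack):
        if name in scope:
            return scope[name]
    raise NameError(name)
</pre>
</blockquote>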
<H3><a name="ply_nn36"></a>6.12 Miscellaneous Yacc Notes</H3>

<ul>
<li>The default parsing method is LALR.  To use SLR instead, run yacc() as follows:

<blockquote>
<pre>
yacc.yacc(method="SLR")
</pre>
</blockquote>
Note: LALR table generation takes approximately twice as long as SLR table generation.  There is no
difference in actual parsing performance---the same code is used in both cases.  LALR is preferred when working
with more complicated grammars since it is more powerful.

<p>

<li>By default, <tt>yacc.py</tt> relies on <tt>lex.py</tt> for tokenizing.  However, an alternative tokenizer
can be supplied as follows:

<blockquote>
<pre>
yacc.parse(lexer=x)
</pre>
</blockquote>
In this case, <tt>x</tt> must be a Lexer object that minimally has a <tt>x.token()</tt> method for retrieving the next
token.  If an input string is given to <tt>yacc.parse()</tt>, the lexer must also have an <tt>x.input()</tt> method.

<p>
<li>By default, yacc generates tables in debugging mode (which produces the parser.out file and other output).
To disable this, use

<blockquote>
<pre>
yacc.yacc(debug=0)
</pre>
</blockquote>

<p>
<li>To change the name of the <tt>parsetab.py</tt> file, use:

<blockquote>
<pre>
yacc.yacc(tabmodule="foo")
</pre>
</blockquote>

<p>
<li>To change the directory in which the <tt>parsetab.py</tt> file (and other output files) are written, use:
<blockquote>
<pre>
yacc.yacc(tabmodule="foo",outputdir="somedirectory")
</pre>
</blockquote>

<p>
<li>To prevent yacc from generating any kind of parser table file, use:
<blockquote>
<pre>
yacc.yacc(write_tables=0)
</pre>
</blockquote>

Note: If you disable table generation, yacc() will regenerate the parsing tables
each time it runs (which may take a while depending on how large your grammar is).

<p>
<li>To print copious amounts of debugging during parsing, use:

<blockquote>
<pre>
yacc.parse(debug=1)
</pre>
</blockquote>

<p>
<li>The <tt>yacc.yacc()</tt> function really returns a parser object.  If you want to support multiple
parsers in the same application, do this:

<blockquote>
<pre>
p = yacc.yacc()
...
p.parse()
</pre>
</blockquote>

Note: The function <tt>yacc.parse()</tt> is bound to the last parser that was generated.

<p>
<li>Since the generation of the LALR tables is relatively expensive, previously generated tables are
cached and reused if possible.  The decision to regenerate the tables is determined by taking an MD5
checksum of all grammar rules and precedence rules.  Only in the event of a mismatch are the tables regenerated.

<p>
It should be noted that table generation is reasonably efficient, even for grammars that involve around 100 rules
and several hundred states.  For more complex languages such as C, table generation may take 30-60 seconds on a slow
machine.  Please be patient.

<p>
<li>Since LR parsing is driven by tables, the performance of the parser is largely independent of the
size of the grammar.  The biggest bottlenecks will be the lexer and the complexity of the code in your grammar rules.
</ul>

<H2><a name="ply_nn37"></a>7. Multiple Parsers and Lexers</H2>

In advanced parsing applications, you may want to have multiple
parsers and lexers.

<p>
As a general rule this isn't a problem.  However, to make it work,
you need to carefully make sure everything gets hooked up correctly.
First, make sure you save the objects returned by <tt>lex()</tt> and
<tt>yacc()</tt>.  For example:

<blockquote>
<pre>
lexer  = lex.lex()       # Return lexer object
parser = yacc.yacc()     # Return parser object
</pre>
</blockquote>

Next, when parsing, make sure you give the <tt>parse()</tt> function a reference to the lexer it
should be using.  For example:

<blockquote>
<pre>
parser.parse(text,lexer=lexer)
</pre>
</blockquote>
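<p>
For instance, a program that works with two different input languages might build
two completely independent lexer/parser pairs.  The module names below
(<tt>calclex</tt>, <tt>calcparse</tt>, <tt>configlex</tt>, <tt>configparse</tt>) are
hypothetical---the point is simply that each parser is built from its own rules,
writes its own table file, and is always handed its own lexer:

<blockquote>
<pre>
import ply.lex as lex
import ply.yacc as yacc

import calclex, calcparse          # Hypothetical modules containing token/grammar rules
import configlex, configparse      # Hypothetical modules for a second language

calc_lexer    = lex.lex(module=calclex)
calc_parser   = yacc.yacc(module=calcparse, tabmodule="calctab")

config_lexer  = lex.lex(module=configlex)
config_parser = yacc.yacc(module=configparse, tabmodule="configtab")

result   = calc_parser.parse("3 + 4 * (5 - 6)", lexer=calc_lexer)
settings = config_parser.parse("name = value", lexer=config_lexer)
</pre>
</blockquote>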
If you forget to do this, the parser will use the last lexer
created--which is not always what you want.

<p>
Within lexer and parser rule functions, these objects are also
available.  In the lexer, the "lexer" attribute of a token refers to
the lexer object that triggered the rule.  For example:

<blockquote>
<pre>
def t_NUMBER(t):
   r'\d+'
   ...
   print t.lexer           # Show lexer object
</pre>
</blockquote>

In the parser, the "lexer" and "parser" attributes refer to the lexer
and parser objects respectively.

<blockquote>
<pre>
def p_expr_plus(p):
   'expr : expr PLUS expr'
   ...
   print p.parser          # Show parser object
   print p.lexer           # Show lexer object
</pre>
</blockquote>

If necessary, arbitrary attributes can be attached to the lexer or parser object.
For example, if you wanted to have different parsing modes, you could attach a mode
attribute to the parser object and look at it later.

<H2><a name="ply_nn38"></a>8. Using Python's Optimized Mode</H2>

Because PLY uses information from doc-strings, parsing and lexing
information must be gathered while running the Python interpreter in
normal mode (i.e., not with the -O or -OO options).  However, if you
specify optimized mode like this:

<blockquote>
<pre>
lex.lex(optimize=1)
yacc.yacc(optimize=1)
</pre>
</blockquote>

then PLY can later be used when Python runs in optimized mode.  To make this work,
make sure you first run Python in normal mode.  Once the lexing and parsing tables
have been generated the first time, run Python in optimized mode.  PLY will use
the tables without the need for doc strings.

<p>
Beware: running PLY in optimized mode disables a lot of error
checking.  You should only do this when your project has stabilized
and you don't need to do any debugging.  One of the purposes of
optimized mode is to substantially decrease the startup time of
your compiler (by assuming that everything is already properly
specified and works).

<H2><a name="ply_nn44"></a>9. Advanced Debugging</H2>

<p>
Debugging a compiler is typically not an easy task.  PLY provides some
advanced diagnostic capabilities through the use of Python's
<tt>logging</tt> module.  The next two sections describe this:

<H3><a name="ply_nn45"></a>9.1 Debugging the lex() and yacc() commands</H3>

<p>
Both the <tt>lex()</tt> and <tt>yacc()</tt> commands have a debugging
mode that can be enabled using the <tt>debug</tt> flag.  For example:

<blockquote>
<pre>
lex.lex(debug=True)
yacc.yacc(debug=True)
</pre>
</blockquote>

Normally, the output produced by debugging is routed to either
standard error or, in the case of <tt>yacc()</tt>, to a file
<tt>parser.out</tt>.  This output can be more carefully controlled
by supplying a logging object.
Here is an example that adds
information about where different debugging messages are coming from:

<blockquote>
<pre>
# Set up a logging object
import logging
logging.basicConfig(
    level = logging.DEBUG,
    filename = "parselog.txt",
    filemode = "w",
    format = "%(filename)10s:%(lineno)4d:%(message)s"
)
log = logging.getLogger()

lex.lex(debug=True,debuglog=log)
yacc.yacc(debug=True,debuglog=log)
</pre>
</blockquote>

If you supply a custom logger, the amount of debugging
information produced can be controlled by setting the logging level.
Typically, debugging messages are either issued at the <tt>DEBUG</tt>,
<tt>INFO</tt>, or <tt>WARNING</tt> levels.

<p>
PLY's error messages and warnings are also produced using the logging
interface.  This can be controlled by passing a logging object
using the <tt>errorlog</tt> parameter.

<blockquote>
<pre>
lex.lex(errorlog=log)
yacc.yacc(errorlog=log)
</pre>
</blockquote>

If you want to completely silence warnings, you can either pass in a
logging object with an appropriate filter level or use the <tt>NullLogger</tt>
object defined in either <tt>lex</tt> or <tt>yacc</tt>.  For example:

<blockquote>
<pre>
yacc.yacc(errorlog=yacc.NullLogger())
</pre>
</blockquote>

<H3><a name="ply_nn46"></a>9.2 Run-time Debugging</H3>

<p>
To enable run-time debugging of a parser, use the <tt>debug</tt> option to parse.  This
option can either be an integer (which simply turns debugging on or off) or an instance
of a logger object.  For example:

<blockquote>
<pre>
log = logging.getLogger()
parser.parse(input,debug=log)
</pre>
</blockquote>

If a logging object is passed, you can use its filtering level to control how much
output gets generated.  The <tt>INFO</tt> level is used to produce information
about rule reductions.  The <tt>DEBUG</tt> level will show information about the
parsing stack, token shifts, and other details.  The <tt>ERROR</tt> level shows information
related to parsing errors.

<p>
For very complicated problems, you should pass in a logging object that
redirects to a file where you can more easily inspect the output after
execution.

<H2><a name="ply_nn39"></a>10. Where to go from here?</H2>

The <tt>examples</tt> directory of the PLY distribution contains several simple examples.  Please consult a
compilers textbook for the theory and underlying implementation details of LR parsing.

</body>
</html>