<html>
<head>
<title>PLY (Python Lex-Yacc)</title>
</head>
<body bgcolor="#ffffff">

<h1>PLY (Python Lex-Yacc)</h1>

<b>
David M. Beazley <br>
dave@dabeaz.com<br>
</b>

<p>
<b>PLY Version: 3.0</b>
<p>

<!-- INDEX -->
<div class="sectiontoc">
<ul>
<li><a href="#ply_nn1">Preface and Requirements</a>
<li><a href="#ply_nn1">Introduction</a>
<li><a href="#ply_nn2">PLY Overview</a>
<li><a href="#ply_nn3">Lex</a>
<ul>
<li><a href="#ply_nn4">Lex Example</a>
<li><a href="#ply_nn5">The tokens list</a>
<li><a href="#ply_nn6">Specification of tokens</a>
<li><a href="#ply_nn7">Token values</a>
<li><a href="#ply_nn8">Discarded tokens</a>
<li><a href="#ply_nn9">Line numbers and positional information</a>
<li><a href="#ply_nn10">Ignored characters</a>
<li><a href="#ply_nn11">Literal characters</a>
<li><a href="#ply_nn12">Error handling</a>
<li><a href="#ply_nn13">Building and using the lexer</a>
<li><a href="#ply_nn14">The @TOKEN decorator</a>
<li><a href="#ply_nn15">Optimized mode</a>
<li><a href="#ply_nn16">Debugging</a>
<li><a href="#ply_nn17">Alternative specification of lexers</a>
<li><a href="#ply_nn18">Maintaining state</a>
<li><a href="#ply_nn19">Lexer cloning</a>
<li><a href="#ply_nn20">Internal lexer state</a>
<li><a href="#ply_nn21">Conditional lexing and start conditions</a>
<li><a href="#ply_nn21">Miscellaneous Issues</a>
</ul>
<li><a href="#ply_nn22">Parsing basics</a>
<li><a href="#ply_nn23">Yacc</a>
<ul>
<li><a href="#ply_nn24">An example</a>
<li><a href="#ply_nn25">Combining Grammar Rule Functions</a>
<li><a href="#ply_nn26">Character Literals</a>
<li><a href="#ply_nn26">Empty Productions</a>
<li><a href="#ply_nn28">Changing the starting symbol</a>
<li><a href="#ply_nn27">Dealing With Ambiguous Grammars</a>
<li><a href="#ply_nn28">The parser.out file</a>
<li><a href="#ply_nn29">Syntax Error Handling</a>
<ul>
<li><a href="#ply_nn30">Recovery and resynchronization with error rules</a>
<li><a href="#ply_nn31">Panic mode recovery</a>
<li><a href="#ply_nn35">Signaling an error from a production</a>
<li><a href="#ply_nn32">General comments on error handling</a>
</ul>
<li><a href="#ply_nn33">Line Number and Position Tracking</a>
<li><a href="#ply_nn34">AST Construction</a>
<li><a href="#ply_nn35">Embedded Actions</a>
<li><a href="#ply_nn36">Miscellaneous Yacc Notes</a>
</ul>
<li><a href="#ply_nn37">Multiple Parsers and Lexers</a>
<li><a href="#ply_nn38">Using Python's Optimized Mode</a>
<li><a href="#ply_nn44">Advanced Debugging</a>
<ul>
<li><a href="#ply_nn45">Debugging the lex() and yacc() commands</a>
<li><a href="#ply_nn46">Run-time Debugging</a>
</ul>
<li><a href="#ply_nn39">Where to go from here?</a>
</ul>
</div>
<!-- INDEX -->


<H2><a name="ply_nn1"></a>1. Preface and Requirements</H2>


<p>
This document provides an overview of lexing and parsing with PLY.
Given the intrinsic complexity of parsing, I would strongly advise
that you read (or at least skim) this entire document before jumping
into a big development project with PLY.
</p>

<p>
PLY-3.0 is compatible with both Python 2 and Python 3.  Be aware that
Python 3 support is new and has not been extensively tested (although
all of the examples and unit tests pass under Python 3.0).  If you are
using Python 2, you should try to use Python 2.4 or newer.  Although PLY
works with versions as far back as Python 2.2, some of its optional features
require more modern library modules.
</p>

<H2><a name="ply_nn1"></a>2. Introduction</H2>


PLY is a pure-Python implementation of the popular compiler
construction tools lex and yacc.  The main goal of PLY is to stay
fairly faithful to the way in which traditional lex/yacc tools work.
This includes supporting LALR(1) parsing as well as providing
extensive input validation, error reporting, and diagnostics.  Thus,
if you've used yacc in another programming language, it should be
relatively straightforward to use PLY.

<p>
Early versions of PLY were developed to support an Introduction to
Compilers course I taught in 2001 at the University of Chicago.  In this course,
students built a fully functional compiler for a simple Pascal-like
language.  Their compiler, implemented entirely in Python, had to
include lexical analysis, parsing, type checking, type inference,
nested scoping, and code generation for the SPARC processor.
Approximately 30 different compiler implementations were completed in
this course.  Most of PLY's interface and operation has been influenced by common
usability problems encountered by students.  Since 2001, PLY has
continued to be improved as feedback has been received from users.
PLY-3.0 represents a major refactoring of the original implementation
with an eye towards future enhancements.

<p>
Since PLY was primarily developed as an instructional tool, you will
find it to be fairly picky about token and grammar rule
specification.  In part, this
added formality is meant to catch common programming mistakes made by
novice users.  However, advanced users will also find such features to
be useful when building complicated grammars for real programming
languages.  It should also be noted that PLY does not provide much in
the way of bells and whistles (e.g., automatic construction of
abstract syntax trees, tree traversal, etc.).  Nor would I consider it
to be a parsing framework.  Instead, you will find a bare-bones, yet
fully capable lex/yacc implementation written entirely in Python.

<p>
The rest of this document assumes that you are somewhat familiar with
parsing theory, syntax-directed translation, and the use of compiler
construction tools such as lex and yacc in other programming
languages.  If you are unfamiliar with these topics, you will probably
want to consult an introductory text such as "Compilers: Principles,
Techniques, and Tools" by Aho, Sethi, and Ullman.  O'Reilly's "Lex
and Yacc" by John Levine may also be handy.  In fact, the O'Reilly book can be
used as a reference for PLY, as the concepts are virtually identical.

<H2><a name="ply_nn2"></a>3. PLY Overview</H2>


PLY consists of two separate modules: <tt>lex.py</tt> and
<tt>yacc.py</tt>, both of which are found in a Python package
called <tt>ply</tt>.  The <tt>lex.py</tt> module is used to break input text into a
collection of tokens specified by a collection of regular expression
rules.  <tt>yacc.py</tt> is used to recognize language syntax that has
been specified in the form of a context-free grammar.  <tt>yacc.py</tt> uses LR parsing and generates its parsing tables
using either the LALR(1) (the default) or SLR table generation algorithm.

<p>
The two tools are meant to work together.  Specifically,
<tt>lex.py</tt> provides an external interface in the form of a
<tt>token()</tt> function that returns the next valid token on the
input stream.  <tt>yacc.py</tt> calls this repeatedly to retrieve
tokens and invoke grammar rules.  The output of <tt>yacc.py</tt> is
often an Abstract Syntax Tree (AST).  However, this is entirely up to
the user.  If desired, <tt>yacc.py</tt> can also be used to implement
simple one-pass compilers.

<p>
Like its Unix counterpart, <tt>yacc.py</tt> provides most of the
features you expect, including extensive error checking, grammar
validation, support for empty productions, error tokens, and ambiguity
resolution via precedence rules.  In fact, everything that is possible in traditional yacc
should be supported in PLY.

<p>
The primary difference between
<tt>yacc.py</tt> and Unix <tt>yacc</tt> is that <tt>yacc.py</tt>
doesn't involve a separate code-generation process.
Instead, PLY relies on reflection (introspection)
to build its lexers and parsers.  Unlike traditional lex/yacc, which
require a special input file that is converted into a separate source
file, the specifications given to PLY <em>are</em> valid Python
programs.  This means that there are no extra source files, nor is
there a special compiler construction step (e.g., running yacc to
generate Python code for the compiler).  Since the generation of the
parsing tables is relatively expensive, PLY caches the results and
saves them to a file.  If no changes are detected in the input source,
the tables are read from the cache.  Otherwise, they are regenerated.

<H2><a name="ply_nn3"></a>4. Lex</H2>


<tt>lex.py</tt> is used to tokenize an input string.  For example, suppose
you're writing a programming language and a user supplied the following input string:

<blockquote>
<pre>
x = 3 + 42 * (s - t)
</pre>
</blockquote>

A tokenizer splits the string into individual tokens:

<blockquote>
<pre>
'x','=', '3', '+', '42', '*', '(', 's', '-', 't', ')'
</pre>
</blockquote>

Tokens are usually given names to indicate what they are.  For example:

<blockquote>
<pre>
'ID','EQUALS','NUMBER','PLUS','NUMBER','TIMES',
'LPAREN','ID','MINUS','ID','RPAREN'
</pre>
</blockquote>

More specifically, the input is broken into pairs of token types and values.  For example:

<blockquote>
<pre>
('ID','x'), ('EQUALS','='), ('NUMBER','3'),
('PLUS','+'), ('NUMBER','42'), ('TIMES','*'),
('LPAREN','('), ('ID','s'), ('MINUS','-'),
('ID','t'), ('RPAREN',')')
</pre>
</blockquote>

The identification of tokens is typically done by writing a series of regular expression
rules.  The next section shows how this is done using <tt>lex.py</tt>.

<H3><a name="ply_nn4"></a>4.1 Lex Example</H3>


The following example shows how <tt>lex.py</tt> is used to write a simple tokenizer.

<blockquote>
<pre>
# ------------------------------------------------------------
# calclex.py
#
# tokenizer for a simple expression evaluator for
# numbers and +,-,*,/
# ------------------------------------------------------------
import ply.lex as lex

# List of token names.   This is always required
tokens = (
   'NUMBER',
   'PLUS',
   'MINUS',
   'TIMES',
   'DIVIDE',
   'LPAREN',
   'RPAREN',
)

# Regular expression rules for simple tokens
t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_LPAREN  = r'\('
t_RPAREN  = r'\)'

# A regular expression rule with some action code
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

# A string containing ignored characters (spaces and tabs)
t_ignore  = ' \t'

# Error handling rule
def t_error(t):
    print "Illegal character '%s'" % t.value[0]
    t.lexer.skip(1)

# Build the lexer
lexer = lex.lex()

</pre>
</blockquote>
To use the lexer, you first need to feed it some input text using
its <tt>input()</tt> method.  After that, repeated calls
to <tt>token()</tt> produce tokens.  The following code shows how this
works:

<blockquote>
<pre>

# Test it out
data = '''
3 + 4 * 10
  + -20 *2
'''

# Give the lexer some input
lexer.input(data)

# Tokenize
while True:
    tok = lexer.token()
    if not tok: break      # No more input
    print tok
</pre>
</blockquote>

When executed, the example will produce the following output:

<blockquote>
<pre>
$ python example.py
LexToken(NUMBER,3,2,1)
LexToken(PLUS,'+',2,3)
LexToken(NUMBER,4,2,5)
LexToken(TIMES,'*',2,7)
LexToken(NUMBER,10,2,10)
LexToken(PLUS,'+',3,14)
LexToken(MINUS,'-',3,16)
LexToken(NUMBER,20,3,18)
LexToken(TIMES,'*',3,20)
LexToken(NUMBER,2,3,21)
</pre>
</blockquote>

Lexers also support the iteration protocol.  So, you can write the above loop as follows:

<blockquote>
<pre>
for tok in lexer:
    print tok
</pre>
</blockquote>

The tokens returned by <tt>lexer.token()</tt> are instances
of <tt>LexToken</tt>.  This object has
attributes <tt>tok.type</tt>, <tt>tok.value</tt>,
<tt>tok.lineno</tt>, and <tt>tok.lexpos</tt>.  The following code shows an example of
accessing these attributes:

<blockquote>
<pre>
# Tokenize
while True:
    tok = lexer.token()
    if not tok: break      # No more input
    print tok.type, tok.value, tok.lineno, tok.lexpos
</pre>
</blockquote>

The <tt>tok.type</tt> and <tt>tok.value</tt> attributes contain the
type and value of the token itself.
<tt>tok.lineno</tt> and <tt>tok.lexpos</tt> contain information about
the location of the token.  <tt>tok.lexpos</tt> is the index of the
token relative to the start of the input text.

<H3><a name="ply_nn5"></a>4.2 The tokens list</H3>


All lexers must provide a list <tt>tokens</tt> that defines all of the possible token
names that can be produced by the lexer.  This list is always required
and is used to perform a variety of validation checks.  The tokens list is also used by the
<tt>yacc.py</tt> module to identify terminals.

<p>
In the example, the following code specified the token names:

<blockquote>
<pre>
tokens = (
   'NUMBER',
   'PLUS',
   'MINUS',
   'TIMES',
   'DIVIDE',
   'LPAREN',
   'RPAREN',
)
</pre>
</blockquote>

<H3><a name="ply_nn6"></a>4.3 Specification of tokens</H3>


Each token is specified by writing a regular expression rule.  Each of these rules is
defined by making a declaration with a special prefix <tt>t_</tt> to indicate that it
defines a token.  For simple tokens, the regular expression can
be specified as a string such as this (note: Python raw strings are used since they are the
most convenient way to write regular expression strings):

<blockquote>
<pre>
t_PLUS = r'\+'
</pre>
</blockquote>

In this case, the name following the <tt>t_</tt> must exactly match one of the
names supplied in <tt>tokens</tt>.  If some kind of action needs to be performed,
a token rule can be specified as a function.  For example, this rule matches numbers and
converts the string into a Python integer.

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t
</pre>
</blockquote>

When a function is used, the regular expression rule is specified in the function documentation string.
The function always takes a single argument which is an instance of
<tt>LexToken</tt>.  This object has attributes of <tt>t.type</tt> which is the token type (as a string),
<tt>t.value</tt> which is the lexeme (the actual text matched), <tt>t.lineno</tt> which is the current line number, and <tt>t.lexpos</tt> which
is the position of the token relative to the beginning of the input text.
By default, <tt>t.type</tt> is set to the name following the <tt>t_</tt> prefix.  The action
function can modify the contents of the <tt>LexToken</tt> object as appropriate.  However,
when it is done, the resulting token should be returned.  If no value is returned by the action
function, the token is simply discarded and the next token read.

<p>
Internally, <tt>lex.py</tt> uses the <tt>re</tt> module to do its pattern matching.  When building the master regular expression,
rules are added in the following order:
<p>
<ol>
<li>All tokens defined by functions are added in the same order as they appear in the lexer file.
<li>Tokens defined by strings are added next by sorting them in order of decreasing regular expression length (longer expressions
are added first).
</ol>
<p>
Without this ordering, it can be difficult to correctly match certain types of tokens.  For example, if you
wanted to have separate tokens for "=" and "==", you need to make sure that "==" is checked first.  By sorting regular
expressions in order of decreasing length, this problem is solved for rules defined as strings.  For functions,
the order can be explicitly controlled since rules appearing first are checked first.
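<p>
The effect of this ordering can be sketched with the plain <tt>re</tt> module (a standalone illustration of the sorting idea, not PLY's actual internals; the rule names here are made up):

```python
import re

# Hypothetical string-style rules as they might appear in a lexer file
string_rules = {'ASSIGN': r'=', 'EQ': r'=='}

# Emulate the ordering: sort string rules by decreasing pattern length
# so that '==' is tried before '=' in the combined expression
ordered = sorted(string_rules.items(), key=lambda kv: len(kv[1]), reverse=True)
master = '|'.join('(?P<%s>%s)' % (name, pat) for name, pat in ordered)

print(re.match(master, '==').lastgroup)   # EQ, not ASSIGN
print(re.match(master, '=').lastgroup)    # ASSIGN
```

Had the shorter pattern come first in the alternation, the input "==" would have been scanned as two ASSIGN tokens instead.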

<p>
To handle reserved words, you should write a single rule to match an
identifier and do a special name lookup in a function like this:

<blockquote>
<pre>
reserved = {
   'if' : 'IF',
   'then' : 'THEN',
   'else' : 'ELSE',
   'while' : 'WHILE',
   ...
}

tokens = ['LPAREN','RPAREN',...,'ID'] + list(reserved.values())

def t_ID(t):
    r'[a-zA-Z_][a-zA-Z_0-9]*'
    t.type = reserved.get(t.value,'ID')    # Check for reserved words
    return t
</pre>
</blockquote>

This approach greatly reduces the number of regular expression rules and is likely to make things a little faster.

<p>
<b>Note:</b> You should avoid writing individual rules for reserved words.  For example, if you write rules like this,

<blockquote>
<pre>
t_FOR   = r'for'
t_PRINT = r'print'
</pre>
</blockquote>

those rules will be triggered for identifiers that include those words as a prefix, such as "forget" or "printed".  This is probably not
what you want.
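<p>
The prefix problem is easy to demonstrate with the <tt>re</tt> module directly (a standalone sketch, not PLY itself):

```python
import re

# A naive standalone rule for the keyword 'for'...
keyword = re.compile(r'for')

# ...happily matches at the start of an ordinary identifier
print(keyword.match('forget').group())   # for

# A word-boundary anchor avoids the prefix match, but the usual PLY
# idiom is the single t_ID rule with a reserved-word lookup shown above
print(re.match(r'for\b', 'forget'))      # None
```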
4834479Sbinkertn@umich.edu
4846498Snate@binkert.org<H3><a name="ply_nn7"></a>4.4 Token values</H3>
4854479Sbinkertn@umich.edu
4864479Sbinkertn@umich.edu
4874479Sbinkertn@umich.eduWhen tokens are returned by lex, they have a value that is stored in the <tt>value</tt> attribute.    Normally, the value is the text
4884479Sbinkertn@umich.eduthat was matched.   However, the value can be assigned to any Python object.   For instance, when lexing identifiers, you may
4894479Sbinkertn@umich.eduwant to return both the identifier name and information from some sort of symbol table.  To do this, you might write a rule like this:
4902632Sstever@eecs.umich.edu
4912632Sstever@eecs.umich.edu<blockquote>
4922632Sstever@eecs.umich.edu<pre>
4932632Sstever@eecs.umich.edudef t_ID(t):
4942632Sstever@eecs.umich.edu    ...
4954479Sbinkertn@umich.edu    # Look up symbol table information and return a tuple
4962632Sstever@eecs.umich.edu    t.value = (t.value, symbol_lookup(t.value))
4972632Sstever@eecs.umich.edu    ...
4982632Sstever@eecs.umich.edu    return t
4992632Sstever@eecs.umich.edu</pre>
5002632Sstever@eecs.umich.edu</blockquote>
5012632Sstever@eecs.umich.edu
5024479Sbinkertn@umich.eduIt is important to note that storing data in other attribute names is <em>not</em> recommended.  The <tt>yacc.py</tt> module only exposes the
5036498Snate@binkert.orgcontents of the <tt>value</tt> attribute.  Thus, accessing other attributes may  be unnecessarily awkward.   If you
5046498Snate@binkert.orgneed to store multiple values on a token, assign a tuple, dictionary, or instance to <tt>value</tt>.
5056498Snate@binkert.org
<H3><a name="ply_nn8"></a>4.5 Discarded tokens</H3>


To discard a token, such as a comment, simply define a token rule that returns no value.  For example:

<blockquote>
<pre>
def t_COMMENT(t):
    r'\#.*'
    pass
    # No return value. Token discarded
</pre>
</blockquote>

Alternatively, you can include the prefix "ignore_" in the token declaration to force a token to be ignored.  For example:

<blockquote>
<pre>
t_ignore_COMMENT = r'\#.*'
</pre>
</blockquote>

Be advised that if you are ignoring many different kinds of text, you may still want to use functions since these provide more precise
control over the order in which regular expressions are matched (i.e., functions are matched in order of specification whereas strings are
sorted by regular expression length).
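<p>
The effect of this sorting can be sketched with the plain <tt>re</tt> module.  The token names <tt>EQ</tt> and <tt>ASSIGN</tt> and the two-pattern setup below are illustrative only, not part of PLY's API:

```python
import re

# Two overlapping patterns: '==' must be tried before '='
patterns = [('ASSIGN', r'='), ('EQ', r'==')]

# Order longest pattern first, as PLY does for string rules
ordered = sorted(patterns, key=lambda p: len(p[1]), reverse=True)
master = re.compile('|'.join('(?P<%s>%s)' % p for p in ordered))

print(master.match('==').lastgroup)   # EQ, not two ASSIGN tokens
```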

<H3><a name="ply_nn9"></a>4.6 Line numbers and positional information</H3>


<p>By default, <tt>lex.py</tt> knows nothing about line numbers.  This is because <tt>lex.py</tt> doesn't know anything
about what constitutes a "line" of input (e.g., the newline character or even if the input is textual data).
To update this information, you need to write a special rule.  In the example, the <tt>t_newline()</tt> rule shows how to do this.

<blockquote>
<pre>
# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)
</pre>
</blockquote>
Within the rule, the <tt>lineno</tt> attribute of the underlying lexer <tt>t.lexer</tt> is updated.
After the line number is updated, the token is simply discarded since nothing is returned.

<p>
<tt>lex.py</tt> does not perform any kind of automatic column tracking.  However, it does record positional
information related to each token in the <tt>lexpos</tt> attribute.  Using this, it is usually possible to compute
column information as a separate step.  For instance, just count backwards until you reach a newline.

<blockquote>
<pre>
# Compute column.
#     input is the input text string
#     token is a token instance
def find_column(input,token):
    line_start = input.rfind('\n',0,token.lexpos) + 1
    column = (token.lexpos - line_start) + 1
    return column
</pre>
</blockquote>

Since column information is often only useful in the context of error handling, calculating the column
position can be performed when needed as opposed to doing it for each token.
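<p>
Because the computation only needs a <tt>lexpos</tt>, it can be exercised without running a lexer at all.  The <tt>Tok</tt> class below is a hypothetical stand-in for a token:

```python
class Tok(object):
    """Hypothetical stand-in for a LexToken; only lexpos is used."""
    def __init__(self, lexpos):
        self.lexpos = lexpos

def find_column(input, token):
    # Index just past the most recent newline (0 if there is none)
    line_start = input.rfind('\n', 0, token.lexpos) + 1
    return (token.lexpos - line_start) + 1

text = "abc\ndef"
print(find_column(text, Tok(0)))   # 1: 'a' starts the first line
print(find_column(text, Tok(4)))   # 1: 'd' starts the second line
print(find_column(text, Tok(6)))   # 3: 'f' is the third character on its line
```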

<H3><a name="ply_nn10"></a>4.7 Ignored characters</H3>


<p>
The special <tt>t_ignore</tt> rule is reserved by <tt>lex.py</tt> for characters
that should be completely ignored in the input stream. 
Usually this is used to skip over whitespace and other non-essential characters. 
Although it is possible to define a regular expression rule for whitespace in a manner
similar to <tt>t_newline()</tt>, the use of <tt>t_ignore</tt> provides substantially better
lexing performance because it is handled as a special case and is checked in a much
more efficient manner than the normal regular expression rules.

<H3><a name="ply_nn11"></a>4.8 Literal characters</H3>


<p>
Literal characters can be specified by defining a variable <tt>literals</tt> in your lexing module.  For example:

<blockquote>
<pre>
literals = [ '+','-','*','/' ]
</pre>
</blockquote>

or alternatively

<blockquote>
<pre>
literals = "+-*/"
</pre>
</blockquote>

A literal character is simply a single character that is returned "as is" when encountered by the lexer.  Literals are checked
after all of the defined regular expression rules.  Thus, if a rule starts with one of the literal characters, it will always 
take precedence.
<p>
When a literal token is returned, both its <tt>type</tt> and <tt>value</tt> attributes are set to the character itself. For example, <tt>'+'</tt>.

<H3><a name="ply_nn12"></a>4.9 Error handling</H3>


<p>
Finally, the <tt>t_error()</tt>
function is used to handle lexing errors that occur when illegal
characters are detected.  In this case, the <tt>t.value</tt> attribute contains the
rest of the input string that has not been tokenized.  In the example, the error function
was defined as follows:

<blockquote>
<pre>
# Error handling rule
def t_error(t):
    print "Illegal character '%s'" % t.value[0]
    t.lexer.skip(1)
</pre>
</blockquote>

In this case, we simply print the offending character and skip ahead one character by calling <tt>t.lexer.skip(1)</tt>.

<H3><a name="ply_nn13"></a>4.10 Building and using the lexer</H3>


<p>
To build the lexer, the function <tt>lex.lex()</tt> is used.  This function
uses Python reflection (or introspection) to read the regular expression rules
out of the calling context and build the lexer. Once the lexer has been built, two methods can
be used to control the lexer.

<ul>
<li><tt>lexer.input(data)</tt>.   Reset the lexer and store a new input string.
<li><tt>lexer.token()</tt>.  Return the next token.  Returns a special <tt>LexToken</tt> instance on success or
None if the end of the input text has been reached.
</ul>

The preferred way to use PLY is to invoke the above methods directly on the lexer object returned by the
<tt>lex()</tt> function.   The legacy interface to PLY involves module-level functions <tt>lex.input()</tt> and <tt>lex.token()</tt>.
For example:

<blockquote>
<pre>
lex.lex()
lex.input(sometext)
while 1:
    tok = lex.token()
    if not tok: break
    print tok
</pre>
</blockquote>

<p>
In this example, the module-level functions <tt>lex.input()</tt> and <tt>lex.token()</tt> are bound to the <tt>input()</tt> 
and <tt>token()</tt> methods of the last lexer created by the lex module.    This interface may go away at some point so
it's probably best not to use it.

<H3><a name="ply_nn14"></a>4.11 The @TOKEN decorator</H3>


In some applications, you may want to build tokens from a series of
more complex regular expression rules.  For example:

<blockquote>
<pre>
digit            = r'([0-9])'
nondigit         = r'([_A-Za-z])'
identifier       = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'        

def t_ID(t):
    # want docstring to be identifier above. ?????
    ...
</pre>
</blockquote>

In this case, we want the regular expression rule for <tt>ID</tt> to be one of the variables above. However, there is no
way to directly specify this using a normal documentation string.   To solve this problem, you can use the <tt>@TOKEN</tt>
decorator.  For example:

<blockquote>
<pre>
from ply.lex import TOKEN

@TOKEN(identifier)
def t_ID(t):
    ...
</pre>
</blockquote>

This will attach <tt>identifier</tt> to the docstring for <tt>t_ID()</tt> allowing <tt>lex.py</tt> to work normally.  An alternative
approach to this problem is to set the docstring directly like this:

<blockquote>
<pre>
def t_ID(t):
    ...

t_ID.__doc__ = identifier
</pre>
</blockquote>
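<p>
Either way, the composed <tt>identifier</tt> pattern is an ordinary regular expression, so it can be sanity-checked with the <tt>re</tt> module before being wired into a lexer:

```python
import re

digit      = r'([0-9])'
nondigit   = r'([_A-Za-z])'
identifier = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'

# Identifiers must start with a letter or underscore
print(re.match(identifier, 'foo_123').group(0))   # foo_123
print(re.match(identifier, '9lives'))             # None
```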

<b>NOTE:</b> Use of <tt>@TOKEN</tt> requires Python-2.4 or newer.  If you're concerned about backwards compatibility with older
versions of Python, use the alternative approach of setting the docstring directly.

<H3><a name="ply_nn15"></a>4.12 Optimized mode</H3>


For improved performance, it may be desirable to use Python's
optimized mode (e.g., running Python with the <tt>-O</tt>
option). However, doing so causes Python to ignore documentation
strings.  This presents special problems for <tt>lex.py</tt>.  To
handle this case, you can create your lexer using
the <tt>optimize</tt> option as follows:

<blockquote>
<pre>
lexer = lex.lex(optimize=1)
</pre>
</blockquote>

Next, run Python in its normal operating mode.  When you do
this, <tt>lex.py</tt> will write a file called <tt>lextab.py</tt> to
the current directory.  This file contains all of the regular
expression rules and tables used during lexing.  On subsequent
executions,
<tt>lextab.py</tt> will simply be imported to build the lexer.  This
approach substantially improves the startup time of the lexer and it
works in Python's optimized mode.

<p>
To change the name of the lexer-generated file, use the <tt>lextab</tt> keyword argument.  For example:

<blockquote>
<pre>
lexer = lex.lex(optimize=1,lextab="footab")
</pre>
</blockquote>

When running in optimized mode, it is important to note that lex disables most error checking.  Thus, this is really only recommended
if you're sure everything is working correctly and you're ready to start releasing production code.

<H3><a name="ply_nn16"></a>4.13 Debugging</H3>


For the purpose of debugging, you can run <tt>lex()</tt> in a debugging mode as follows:

<blockquote>
<pre>
lexer = lex.lex(debug=1)
</pre>
</blockquote>

<p>
This will produce various sorts of debugging information including all of the added rules,
the master regular expressions used by the lexer, and tokens generated during lexing.
</p>

<p>
In addition, <tt>lex.py</tt> comes with a simple main function which
will either tokenize input read from standard input or from a file specified
on the command line. To use it, simply put this in your lexer:
</p>

<blockquote>
<pre>
if __name__ == '__main__':
     lex.runmain()
</pre>
</blockquote>

Please refer to the "Debugging" section near the end for some more advanced details 
of debugging.

<H3><a name="ply_nn17"></a>4.14 Alternative specification of lexers</H3>


As shown in the example, lexers are specified all within one Python module.   If you want to
put token rules in a different module from the one in which you invoke <tt>lex()</tt>, use the
<tt>module</tt> keyword argument.

<p>
For example, you might have a dedicated module that just contains
the token rules:

<blockquote>
<pre>
# module: tokrules.py
# This module just contains the lexing rules

# List of token names.   This is always required
tokens = (
   'NUMBER',
   'PLUS',
   'MINUS',
   'TIMES',
   'DIVIDE',
   'LPAREN',
   'RPAREN',
)

# Regular expression rules for simple tokens
t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_LPAREN  = r'\('
t_RPAREN  = r'\)'

# A regular expression rule with some action code
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

# A string containing ignored characters (spaces and tabs)
t_ignore  = ' \t'

# Error handling rule
def t_error(t):
    print "Illegal character '%s'" % t.value[0]
    t.lexer.skip(1)
</pre>
</blockquote>

Now, if you wanted to build a tokenizer from these rules from within a different module, you would do the following (shown for Python interactive mode):

<blockquote>
<pre>
>>> import tokrules
>>> <b>lexer = lex.lex(module=tokrules)</b>
>>> lexer.input("3 + 4")
>>> lexer.token()
LexToken(NUMBER,3,1,0)
>>> lexer.token()
LexToken(PLUS,'+',1,2)
>>> lexer.token()
LexToken(NUMBER,4,1,4)
>>> lexer.token()
None
>>>
</pre>
</blockquote>

The <tt>module</tt> option can also be used to define lexers from instances of a class.  For example:

<blockquote>
<pre>
import ply.lex as lex

class MyLexer:
    # List of token names.   This is always required
    tokens = (
       'NUMBER',
       'PLUS',
       'MINUS',
       'TIMES',
       'DIVIDE',
       'LPAREN',
       'RPAREN',
    )

    # Regular expression rules for simple tokens
    t_PLUS    = r'\+'
    t_MINUS   = r'-'
    t_TIMES   = r'\*'
    t_DIVIDE  = r'/'
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'

    # A regular expression rule with some action code
    # Note addition of self parameter since we're in a class
    def t_NUMBER(self,t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Define a rule so we can track line numbers
    def t_newline(self,t):
        r'\n+'
        t.lexer.lineno += len(t.value)

    # A string containing ignored characters (spaces and tabs)
    t_ignore  = ' \t'

    # Error handling rule
    def t_error(self,t):
        print "Illegal character '%s'" % t.value[0]
        t.lexer.skip(1)

    <b># Build the lexer
    def build(self,**kwargs):
        self.lexer = lex.lex(module=self, **kwargs)</b>

    # Test its output
    def test(self,data):
        self.lexer.input(data)
        while True:
             tok = self.lexer.token()
             if not tok: break
             print tok

# Build the lexer and try it out
m = MyLexer()
m.build()           # Build the lexer
m.test("3 + 4")     # Test it
</pre>
</blockquote>


When building a lexer from a class, <em>you should construct the lexer from
an instance of the class</em>, not the class object itself.  This is because
PLY only works properly if the lexer actions are defined by bound methods.

<p>
When using the <tt>module</tt> option to <tt>lex()</tt>, PLY collects symbols
from the underlying object using the <tt>dir()</tt> function. There is no
direct access to the <tt>__dict__</tt> attribute of the object supplied as a 
module value.

<P>
Finally, if you want to keep things nicely encapsulated, but don't want to use a 
full-fledged class definition, lexers can be defined using closures.  For example:

<blockquote>
<pre>
import ply.lex as lex

# List of token names.   This is always required
tokens = (
  'NUMBER',
  'PLUS',
  'MINUS',
  'TIMES',
  'DIVIDE',
  'LPAREN',
  'RPAREN',
)

def MyLexer():
    # Regular expression rules for simple tokens
    t_PLUS    = r'\+'
    t_MINUS   = r'-'
    t_TIMES   = r'\*'
    t_DIVIDE  = r'/'
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'

    # A regular expression rule with some action code
    def t_NUMBER(t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Define a rule so we can track line numbers
    def t_newline(t):
        r'\n+'
        t.lexer.lineno += len(t.value)

    # A string containing ignored characters (spaces and tabs)
    t_ignore  = ' \t'

    # Error handling rule
    def t_error(t):
        print "Illegal character '%s'" % t.value[0]
        t.lexer.skip(1)

    # Build the lexer from my environment and return it
    return lex.lex()
</pre>
</blockquote>


<H3><a name="ply_nn18"></a>4.15 Maintaining state</H3>


In your lexer, you may want to maintain a variety of state
information.  This might include mode settings, symbol tables, and
other details.  As an example, suppose that you wanted to keep
track of how many NUMBER tokens had been encountered.

<p>
One way to do this is to keep a set of global variables in the module
where you created the lexer.  For example:

<blockquote>
<pre>
num_count = 0
def t_NUMBER(t):
    r'\d+'
    global num_count
    num_count += 1
    t.value = int(t.value)
    return t
</pre>
</blockquote>

If you don't like the use of a global variable, another place to store
information is inside the Lexer object created by <tt>lex()</tt>.
To do this, you can use the <tt>lexer</tt> attribute of tokens passed to
the various rules. For example:

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    t.lexer.num_count += 1     # Note use of lexer attribute
    t.value = int(t.value)
    return t

lexer = lex.lex()
lexer.num_count = 0            # Set the initial count
</pre>
</blockquote>

This latter approach has the advantage of being simple and working 
correctly in applications where multiple instantiations of a given
lexer exist in the same application.   However, this might also feel
like a gross violation of encapsulation to OO purists. 
Just to put your mind at some ease, all
internal attributes of the lexer (with the exception of <tt>lineno</tt>) have names that are prefixed
by <tt>lex</tt> (e.g., <tt>lexdata</tt>,<tt>lexpos</tt>, etc.).  Thus,
it is perfectly safe to store attributes in the lexer that
don't have names starting with that prefix or a name that conflicts with one of the
predefined methods (e.g., <tt>input()</tt>, <tt>token()</tt>, etc.).

<p>
If you don't like assigning values on the lexer object, you can define your lexer as a class as
shown in the previous section:

<blockquote>
<pre>
class MyLexer:
    ...
    def t_NUMBER(self,t):
        r'\d+'
        self.num_count += 1
        t.value = int(t.value)
        return t

    def build(self, **kwargs):
        self.lexer = lex.lex(object=self,**kwargs)

    def __init__(self):
        self.num_count = 0
</pre>
</blockquote>

The class approach may be the easiest to manage if your application is
going to be creating multiple instances of the same lexer and you need
to manage a lot of state.

<p>
State can also be managed through closures.   For example, in Python 3:

<blockquote>
<pre>
def MyLexer():
    num_count = 0
    ...
    def t_NUMBER(t):
        r'\d+'
        nonlocal num_count
        num_count += 1
        t.value = int(t.value)
        return t
    ...
</pre>
</blockquote>

<H3><a name="ply_nn19"></a>4.16 Lexer cloning</H3>


<p>
If necessary, a lexer object can be duplicated by invoking its <tt>clone()</tt> method.  For example:

<blockquote>
<pre>
lexer = lex.lex()
...
newlexer = lexer.clone()
</pre>
</blockquote>


When a lexer is cloned, the copy is exactly identical to the original lexer
including any input text and internal state. However, the clone allows a
different set of input text to be supplied which may be processed separately.
This may be useful in situations when you are writing a parser/compiler that
involves recursive or reentrant processing.  For instance, if you
needed to scan ahead in the input for some reason, you could create a
clone and use it to look ahead.  Or, if you were implementing some kind of preprocessor,
cloned lexers could be used to handle different input files.

<p>
Creating a clone is different than calling <tt>lex.lex()</tt> in that
PLY doesn't regenerate any of the internal tables or regular expressions.

<p>
Special considerations need to be made when cloning lexers that also
maintain their own internal state using classes or closures.  Namely,
you need to be aware that the newly created lexers will share all of
this state with the original lexer.  For example, if you defined a
lexer as a class and did this:

<blockquote>
<pre>
m = MyLexer()
a = lex.lex(object=m)      # Create a lexer

b = a.clone()              # Clone the lexer
</pre>
</blockquote>

Then both <tt>a</tt> and <tt>b</tt> are going to be bound to the same
object <tt>m</tt> and any changes to <tt>m</tt> will be reflected in both lexers.  It's
important to emphasize that <tt>clone()</tt> is only meant to create a new lexer
that reuses the regular expressions and environment of another lexer.  If you
need to make a totally new copy of a lexer, then call <tt>lex()</tt> again.

<H3><a name="ply_nn20"></a>4.17 Internal lexer state</H3>


A lexer object <tt>lexer</tt> has a number of internal attributes that may be useful in certain
situations.

<p>
<tt>lexer.lexpos</tt>
<blockquote>
This attribute is an integer that contains the current position within the input text.  If you modify
the value, it will change the result of the next call to <tt>token()</tt>.  Within token rule functions, this points
to the first character <em>after</em> the matched text.  If the value is modified within a rule, the next returned token will be
matched at the new position.
</blockquote>

<p>
<tt>lexer.lineno</tt>
<blockquote>
The current value of the line number attribute stored in the lexer.  PLY only specifies that the attribute
exists---it never sets, updates, or performs any processing with it.  If you want to track line numbers,
you will need to add code yourself (see the section on line numbers and positional information).
</blockquote>

<p>
<tt>lexer.lexdata</tt>
<blockquote>
The current input text stored in the lexer.  This is the string passed with the <tt>input()</tt> method. It
would probably be a bad idea to modify this unless you really know what you're doing.
</blockquote>

<P>
<tt>lexer.lexmatch</tt>
<blockquote>
This is the raw <tt>Match</tt> object returned by the Python <tt>re.match()</tt> function (used internally by PLY) for the
current token.  If you have written a regular expression that contains named groups, you can use this to retrieve those values.
Note: This attribute is only updated when tokens are defined and processed by functions.
</blockquote>
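<p>
Since <tt>lexer.lexmatch</tt> is just the match object produced by <tt>re.match()</tt>, named groups are retrieved with the standard <tt>group()</tt> method.  The following self-contained sketch uses the <tt>re</tt> module directly to show the kind of access you would perform on <tt>lexmatch</tt> inside a token rule (the pattern and group names are made up for illustration):

```python
import re

# A token pattern with named groups, e.g. for an assignment like "count=42"
pattern = r'(?P<name>[a-zA-Z_][a-zA-Z0-9_]*)=(?P<number>\d+)'

# Inside a PLY token function, t.lexer.lexmatch is this same kind of object
m = re.match(pattern, 'count=42')

name = m.group('name')       # 'count'
number = m.group('number')   # '42'
```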

<H3><a name="ply_nn21"></a>4.18 Conditional lexing and start conditions</H3>


In advanced parsing applications, it may be useful to have different
lexing states. For instance, you may want the occurrence of a certain
token or syntactic construct to trigger a different kind of lexing.
PLY supports a feature that allows the underlying lexer to be put into
a series of different states.  Each state can have its own tokens,
lexing rules, and so forth.  The implementation is based largely on
the "start condition" feature of GNU flex.  Details of this can be found
at <a
href="http://www.gnu.org/software/flex/manual/html_chapter/flex_11.html">http://www.gnu.org/software/flex/manual/html_chapter/flex_11.html</a>.

<p>
To define a new lexing state, it must first be declared.  This is done by including a "states" declaration in your
lex file.  For example:

<blockquote>
<pre>
states = (
   ('foo','exclusive'),
   ('bar','inclusive'),
)
</pre>
</blockquote>

This declares two states, <tt>'foo'</tt>
and <tt>'bar'</tt>.  States may be of two types: <tt>'exclusive'</tt>
and <tt>'inclusive'</tt>.  An exclusive state completely overrides the
default behavior of the lexer.  That is, lex will only return tokens
and apply rules defined specifically for that state.  An inclusive
state adds additional tokens and rules to the default set of rules.
Thus, lex will return the tokens defined by default in addition
to those defined for the inclusive state.

<p>
Once a state has been declared, tokens and rules are declared by including the
state name in the token/rule declaration.  For example:

<blockquote>
<pre>
t_foo_NUMBER = r'\d+'                      # Token 'NUMBER' in state 'foo'
t_bar_ID     = r'[a-zA-Z_][a-zA-Z0-9_]*'   # Token 'ID' in state 'bar'

def t_foo_newline(t):
    r'\n'
    t.lexer.lineno += 1
</pre>
</blockquote>

A token can be declared in multiple states by including multiple state names in the declaration. For example:

<blockquote>
<pre>
t_foo_bar_NUMBER = r'\d+'         # Defines token 'NUMBER' in both states 'foo' and 'bar'
</pre>
</blockquote>

Alternatively, a token can be declared in all states by using 'ANY' in the name:

<blockquote>
<pre>
t_ANY_NUMBER = r'\d+'         # Defines a token 'NUMBER' in all states
</pre>
</blockquote>

If no state name is supplied, as is normally the case, the token is associated with a special state <tt>'INITIAL'</tt>.  For example,
these two declarations are identical:

<blockquote>
<pre>
t_NUMBER = r'\d+'
t_INITIAL_NUMBER = r'\d+'
</pre>
</blockquote>

<p>
States are also associated with the special <tt>t_ignore</tt> and <tt>t_error()</tt> declarations.  For example, if a state treats
these differently, you can declare:

<blockquote>
<pre>
t_foo_ignore = " \t\n"       # Ignored characters for state 'foo'

def t_bar_error(t):          # Special error handler for state 'bar'
    pass
</pre>
</blockquote>

By default, lexing operates in the <tt>'INITIAL'</tt> state.  This state includes all of the normally defined tokens.
For users who aren't using different states, this fact is completely transparent.   If, during lexing or parsing, you want to change
the lexing state, use the <tt>begin()</tt> method.   For example:

<blockquote>
<pre>
def t_begin_foo(t):
    r'start_foo'
    t.lexer.begin('foo')             # Starts 'foo' state
</pre>
</blockquote>

To get out of a state, you use <tt>begin()</tt> to switch back to the initial state.  For example:

<blockquote>
<pre>
def t_foo_end(t):
    r'end_foo'
    t.lexer.begin('INITIAL')        # Back to the initial state
</pre>
</blockquote>

The management of states can also be done with a stack.  For example:

<blockquote>
<pre>
def t_begin_foo(t):
    r'start_foo'
    t.lexer.push_state('foo')             # Starts 'foo' state

def t_foo_end(t):
    r'end_foo'
    t.lexer.pop_state()                   # Back to the previous state
</pre>
</blockquote>

<p>
The use of a stack would be useful in situations where there are many ways of entering a new lexing state and you merely want to go back
to the previous state afterwards.

<P>
An example might help clarify.  Suppose you were writing a parser and you wanted to grab sections of arbitrary C code enclosed by
curly braces.  That is, whenever you encounter a starting brace '{', you want to read all of the enclosed code up to the ending brace '}'
and return it as a string.   Doing this with a normal regular expression rule is nearly (if not actually) impossible.  This is because braces can
be nested and can be included in comments and strings.  Thus, simply matching up to the first matching '}' character isn't good enough.  Here is how
you might use lexer states to do this:

<blockquote>
<pre>
# Declare the state
states = (
  ('ccode','exclusive'),
)

# Match the first {. Enter ccode state.
def t_ccode(t):
    r'\{'
    t.lexer.code_start = t.lexer.lexpos        # Record the starting position
    t.lexer.level = 1                          # Initial brace level
    t.lexer.begin('ccode')                     # Enter 'ccode' state

# Rules for the ccode state
def t_ccode_lbrace(t):
    r'\{'
    t.lexer.level +=1

def t_ccode_rbrace(t):
    r'\}'
    t.lexer.level -=1

    # If closing brace, return the code fragment
    if t.lexer.level == 0:
         t.value = t.lexer.lexdata[t.lexer.code_start:t.lexer.lexpos+1]
         t.type = "CCODE"
         t.lexer.lineno += t.value.count('\n')
         t.lexer.begin('INITIAL')
         return t

# C or C++ comment (ignore)
def t_ccode_comment(t):
    r'(/\*(.|\n)*?\*/)|(//.*)'
    pass

# C string
def t_ccode_string(t):
   r'\"([^\\\n]|(\\.))*?\"'

# C character literal
def t_ccode_char(t):
   r'\'([^\\\n]|(\\.))*?\''

# Any sequence of non-whitespace characters (not braces, strings)
def t_ccode_nonspace(t):
   r'[^\s\{\}\'\"]+'

# Ignored characters (whitespace)
t_ccode_ignore = " \t\n"

# For bad characters, we just skip over them
def t_ccode_error(t):
    t.lexer.skip(1)
</pre>
</blockquote>

In this example, the occurrence of the first '{' causes the lexer to record the starting position and enter a new state <tt>'ccode'</tt>.  A collection of rules then match
various parts of the input that follow (comments, strings, etc.).  All of these rules merely discard the token (by not returning a value).
However, if the closing right brace is encountered, the rule <tt>t_ccode_rbrace</tt> collects all of the code (using the earlier recorded starting
position), stores it, and returns a token 'CCODE' containing all of that text.  When returning the token, the lexing state is restored back to its
initial state.

<H3><a name="ply_nn21"></a>4.19 Miscellaneous Issues</H3>


<P>
<li>The lexer requires input to be supplied as a single input string.  Since most machines have more than enough memory, this
rarely presents a performance concern.  However, it means that the lexer currently can't be used with streaming data
such as open files or sockets.  This limitation is primarily a side-effect of using the <tt>re</tt> module.

<p>
<li>The lexer should work properly with Unicode strings supplied both as token/pattern matching rules and
as input text.

<p>
<li>If you need to supply optional flags to the <tt>re.compile()</tt> function, use the <tt>reflags</tt> option to lex.  For example:

<blockquote>
<pre>
lex.lex(reflags=re.UNICODE)
</pre>
</blockquote>

<p>
<li>Since the lexer is written entirely in Python, its performance is
largely determined by that of the Python <tt>re</tt> module.  Although
the lexer has been written to be as efficient as possible, it's not
blazingly fast when used on very large input files.  If
performance is a concern, you might consider upgrading to the most
recent version of Python, creating a hand-written lexer, or offloading
the lexer into a C extension module.

<p>
If you are going to create a hand-written lexer and you plan to use it with <tt>yacc.py</tt>,
it only needs to conform to the following requirements:

<ul>
<li>It must provide a <tt>token()</tt> method that returns the next token or <tt>None</tt> if no more
tokens are available.
<li>The <tt>token()</tt> method must return an object <tt>tok</tt> that has <tt>type</tt> and <tt>value</tt> attributes.
</ul>
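<p>
A minimal hand-written lexer meeting these two requirements might look like the following sketch (the token class and the crude whitespace-based tokenization are invented purely for illustration):

```python
class Token(object):
    def __init__(self, type, value):
        self.type = type
        self.value = value

class SimpleLexer(object):
    """A hand-written lexer usable with yacc.py: it only needs token()."""
    def __init__(self, text):
        # Crude tokenization: whitespace-separated numbers and names
        self.words = text.split()
        self.pos = 0

    def token(self):
        if self.pos >= len(self.words):
            return None                    # No more tokens available
        word = self.words[self.pos]
        self.pos += 1
        if word.isdigit():
            return Token('NUMBER', int(word))
        return Token('NAME', word)

lexer = SimpleLexer('x 42')
t1 = lexer.token()      # NAME 'x'
t2 = lexer.token()      # NUMBER 42
t3 = lexer.token()      # None
```

In practice you would also want to attach <tt>lineno</tt> and <tt>lexpos</tt> attributes to each token if you care about position tracking and error reporting.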

<H2><a name="ply_nn22"></a>5. Parsing basics</H2>


<tt>yacc.py</tt> is used to parse language syntax.  Before showing an
example, there are a few important bits of background that must be
mentioned.  First, <em>syntax</em> is usually specified in terms of a BNF grammar.
For example, if you wanted to parse
simple arithmetic expressions, you might first write an unambiguous
grammar specification like this:

<blockquote>
<pre>
expression : expression + term
           | expression - term
           | term

term       : term * factor
           | term / factor
           | factor

factor     : NUMBER
           | ( expression )
</pre>
</blockquote>

In the grammar, symbols such as <tt>NUMBER</tt>, <tt>+</tt>, <tt>-</tt>, <tt>*</tt>, and <tt>/</tt> are known
as <em>terminals</em> and correspond to raw input tokens.  Identifiers such as <tt>term</tt> and <tt>factor</tt> refer to
grammar rules comprised of a collection of terminals and other rules.  These identifiers are known as <em>non-terminals</em>.
<P>

The semantic behavior of a language is often specified using a
technique known as syntax directed translation.  In syntax directed
translation, attributes are attached to each symbol in a given grammar
rule along with an action.  Whenever a particular grammar rule is
recognized, the action describes what to do.  For example, given the
expression grammar above, you might write the specification for a
simple calculator like this:

<blockquote>
<pre>
Grammar                             Action
--------------------------------    --------------------------------------------
expression0 : expression1 + term    expression0.val = expression1.val + term.val
            | expression1 - term    expression0.val = expression1.val - term.val
            | term                  expression0.val = term.val

term0       : term1 * factor        term0.val = term1.val * factor.val
            | term1 / factor        term0.val = term1.val / factor.val
            | factor                term0.val = factor.val

factor      : NUMBER                factor.val = int(NUMBER.lexval)
            | ( expression )        factor.val = expression.val
</pre>
</blockquote>
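<p>
The attribute flow in this table can be mimicked in ordinary Python by writing each action as a function of its children's values.  This is only an illustration of the idea of synthesized attributes, not PLY code:

```python
# Each "action" computes the left-hand symbol's val from its children's vals
def factor_number(lexval):              # factor : NUMBER
    return int(lexval)

def term_times(term_val, factor_val):   # term0 : term1 * factor
    return term_val * factor_val

def expr_plus(expr_val, term_val):      # expression0 : expression1 + term
    return expr_val + term_val

# Evaluating "3 + 5 * 2" bottom-up, in the order the reductions would occur
result = expr_plus(factor_number('3'),
                   term_times(factor_number('5'), factor_number('2')))
# result == 13
```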

A good way to think about syntax directed translation is to
view each symbol in the grammar as a kind of object. Associated
with each symbol is a value representing its "state" (for example, the
<tt>val</tt> attribute above).    Semantic
actions are then expressed as a collection of functions or methods
that operate on the symbols and associated values.

<p>
Yacc uses a parsing technique known as LR-parsing or shift-reduce parsing.  LR parsing is a
bottom up technique that tries to recognize the right-hand-side of various grammar rules.
Whenever a valid right-hand-side is found in the input, the appropriate action code is triggered and the
grammar symbols are replaced by the grammar symbol on the left-hand-side.

<p>
LR parsing is commonly implemented by shifting grammar symbols onto a
stack and looking at the stack and the next input token for patterns that
match one of the grammar rules.
The details of the algorithm can be found in a compiler textbook, but the
following example illustrates the steps that are performed if you
wanted to parse the expression
<tt>3 + 5 * (10 - 20)</tt> using the grammar defined above.  In the example,
the special symbol <tt>$</tt> represents the end of input.


<blockquote>
<pre>
Step Symbol Stack           Input Tokens            Action
---- ---------------------  ---------------------   -------------------------------
1                           3 + 5 * ( 10 - 20 )$    Shift 3
2    3                        + 5 * ( 10 - 20 )$    Reduce factor : NUMBER
3    factor                   + 5 * ( 10 - 20 )$    Reduce term   : factor
4    term                     + 5 * ( 10 - 20 )$    Reduce expr : term
5    expr                     + 5 * ( 10 - 20 )$    Shift +
6    expr +                     5 * ( 10 - 20 )$    Shift 5
7    expr + 5                     * ( 10 - 20 )$    Reduce factor : NUMBER
8    expr + factor                * ( 10 - 20 )$    Reduce term   : factor
9    expr + term                  * ( 10 - 20 )$    Shift *
10   expr + term *                  ( 10 - 20 )$    Shift (
11   expr + term * (                  10 - 20 )$    Shift 10
12   expr + term * ( 10                  - 20 )$    Reduce factor : NUMBER
13   expr + term * ( factor              - 20 )$    Reduce term : factor
14   expr + term * ( term                - 20 )$    Reduce expr : term
15   expr + term * ( expr                - 20 )$    Shift -
16   expr + term * ( expr -                20 )$    Shift 20
17   expr + term * ( expr - 20                )$    Reduce factor : NUMBER
18   expr + term * ( expr - factor            )$    Reduce term : factor
19   expr + term * ( expr - term              )$    Reduce expr : expr - term
20   expr + term * ( expr                     )$    Shift )
21   expr + term * ( expr )                    $    Reduce factor : (expr)
22   expr + term * factor                      $    Reduce term : term * factor
23   expr + term                               $    Reduce expr : expr + term
24   expr                                      $    Reduce expr
25                                             $    Success!
</pre>
</blockquote>

When parsing the expression, an underlying state machine and the
current input token determine what happens next.  If the next token
looks like part of a valid grammar rule (based on other items on the
stack), it is generally shifted onto the stack.  If the top of the
stack contains a valid right-hand-side of a grammar rule, it is
usually "reduced" and the symbols replaced with the symbol on the
left-hand-side.  When this reduction occurs, the appropriate action is
triggered (if defined).  If the input token can't be shifted and the
top of stack doesn't match any grammar rules, a syntax error has
occurred and the parser must take some kind of recovery step (or bail
out).  A parse is only successful if the parser reaches a state where
the symbol stack is empty and there are no more input tokens.

<p>
It is important to note that the underlying implementation is built
around a large finite-state machine that is encoded in a collection of
tables. The construction of these tables is non-trivial and
beyond the scope of this discussion.  However, subtle details of this
process explain why, in the example above, the parser chooses to shift
a token onto the stack in step 9 rather than reducing the
rule <tt>expr : expr + term</tt>.

<H2><a name="ply_nn23"></a>6. Yacc</H2>


The <tt>ply.yacc</tt> module implements the parsing component of PLY.
The name "yacc" stands for "Yet Another Compiler Compiler" and is
borrowed from the Unix tool of the same name.

<H3><a name="ply_nn24"></a>6.1 An example</H3>

15504479Sbinkertn@umich.edu
15512632Sstever@eecs.umich.edu
15522632Sstever@eecs.umich.eduSuppose you wanted to make a grammar for simple arithmetic expressions as previously described.   Here is
15532632Sstever@eecs.umich.eduhow you would do it with <tt>yacc.py</tt>:
15542632Sstever@eecs.umich.edu
15552632Sstever@eecs.umich.edu<blockquote>
15562632Sstever@eecs.umich.edu<pre>
15572632Sstever@eecs.umich.edu# Yacc example
15582632Sstever@eecs.umich.edu
15594479Sbinkertn@umich.eduimport ply.yacc as yacc
15602632Sstever@eecs.umich.edu
15612632Sstever@eecs.umich.edu# Get the token map from the lexer.  This is required.
15622632Sstever@eecs.umich.edufrom calclex import tokens
15632632Sstever@eecs.umich.edu
15644479Sbinkertn@umich.edudef p_expression_plus(p):
15652632Sstever@eecs.umich.edu    'expression : expression PLUS term'
15664479Sbinkertn@umich.edu    p[0] = p[1] + p[3]
15674479Sbinkertn@umich.edu
15684479Sbinkertn@umich.edudef p_expression_minus(p):
15692632Sstever@eecs.umich.edu    'expression : expression MINUS term'
15704479Sbinkertn@umich.edu    p[0] = p[1] - p[3]
15714479Sbinkertn@umich.edu
15724479Sbinkertn@umich.edudef p_expression_term(p):
15732632Sstever@eecs.umich.edu    'expression : term'
15744479Sbinkertn@umich.edu    p[0] = p[1]
15754479Sbinkertn@umich.edu
15764479Sbinkertn@umich.edudef p_term_times(p):
15772632Sstever@eecs.umich.edu    'term : term TIMES factor'
15784479Sbinkertn@umich.edu    p[0] = p[1] * p[3]
15794479Sbinkertn@umich.edu
15804479Sbinkertn@umich.edudef p_term_div(p):
15812632Sstever@eecs.umich.edu    'term : term DIVIDE factor'
15824479Sbinkertn@umich.edu    p[0] = p[1] / p[3]
15834479Sbinkertn@umich.edu
15844479Sbinkertn@umich.edudef p_term_factor(p):
15852632Sstever@eecs.umich.edu    'term : factor'
15864479Sbinkertn@umich.edu    p[0] = p[1]
15874479Sbinkertn@umich.edu
15884479Sbinkertn@umich.edudef p_factor_num(p):
15892632Sstever@eecs.umich.edu    'factor : NUMBER'
15904479Sbinkertn@umich.edu    p[0] = p[1]
15914479Sbinkertn@umich.edu
15924479Sbinkertn@umich.edudef p_factor_expr(p):
15932632Sstever@eecs.umich.edu    'factor : LPAREN expression RPAREN'
15944479Sbinkertn@umich.edu    p[0] = p[2]
15952632Sstever@eecs.umich.edu
15962632Sstever@eecs.umich.edu# Error rule for syntax errors
15974479Sbinkertn@umich.edudef p_error(p):
15982632Sstever@eecs.umich.edu    print "Syntax error in input!"
15992632Sstever@eecs.umich.edu
16002632Sstever@eecs.umich.edu# Build the parser
16016498Snate@binkert.orgparser = yacc.yacc()
16026498Snate@binkert.org
16036498Snate@binkert.orgwhile True:
16042632Sstever@eecs.umich.edu   try:
16052632Sstever@eecs.umich.edu       s = raw_input('calc > ')
16062632Sstever@eecs.umich.edu   except EOFError:
16072632Sstever@eecs.umich.edu       break
16082632Sstever@eecs.umich.edu   if not s: continue
16096498Snate@binkert.org   result = parser.parse(s)
16102632Sstever@eecs.umich.edu   print result
16112632Sstever@eecs.umich.edu</pre>
16122632Sstever@eecs.umich.edu</blockquote>

In this example, each grammar rule is defined by a Python function
where the docstring to that function contains the appropriate
context-free grammar specification.  The statements that make up the
function body implement the semantic actions of the rule. Each function
accepts a single argument <tt>p</tt> that is a sequence containing the
values of each grammar symbol in the corresponding rule.  The values
of <tt>p[i]</tt> are mapped to grammar symbols as shown here:

<blockquote>
<pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    #   ^            ^        ^    ^
    #  p[0]         p[1]     p[2] p[3]

    p[0] = p[1] + p[3]
</pre>
</blockquote>

<p>
For tokens, the "value" of the corresponding <tt>p[i]</tt> is the
<em>same</em> as the <tt>p.value</tt> attribute assigned in the lexer
module.  For non-terminals, the value is determined by whatever is
placed in <tt>p[0]</tt> when rules are reduced.  This value can be
anything at all.  However, it is probably most common for the value to
be a simple Python type, a tuple, or an instance.  In this example, we
are relying on the fact that the <tt>NUMBER</tt> token stores an
integer value in its value field.  All of the other rules simply
perform various types of integer operations and propagate the result.
</p>
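<p>
To make the index mapping concrete, here is a rough simulation that uses a
plain Python list in place of the production object that <tt>yacc.py</tt>
actually passes (a sketch for illustration only, not PLY's real class):

```python
# Sketch: a plain list stands in for the production object passed by
# yacc.py.  Slot 0 holds the result; slots 1..3 hold the values of the
# right-hand-side symbols of the rule.
def p_expression_plus(p):
    'expression : expression PLUS term'
    p[0] = p[1] + p[3]

# Simulate reducing "3 + 4": p[1] is the expression value, p[2] the
# value of the PLUS token, p[3] the term value.
p = [None, 3, '+', 4]
p_expression_plus(p)
assert p[0] == 7
```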

<p>
Note: The use of negative indices has a special meaning in
yacc---specifically, <tt>p[-1]</tt> does not have the same value
as <tt>p[3]</tt> in this example.  Please see the section on "Embedded
Actions" for further details.
</p>

<p>
The first rule defined in the yacc specification determines the
starting grammar symbol (in this case, a rule for <tt>expression</tt>
appears first).  Whenever the starting rule is reduced by the parser
and no more input is available, parsing stops and the final value is
returned (this value will be whatever the top-most rule placed
in <tt>p[0]</tt>). Note: an alternative starting symbol can be
specified using the <tt>start</tt> keyword argument to
<tt>yacc()</tt>.

<p>The <tt>p_error(p)</tt> rule is defined to catch syntax errors.
See the error handling section below for more detail.

<p>
To build the parser, call the <tt>yacc.yacc()</tt> function.  This
function looks at the module and attempts to construct all of the LR
parsing tables for the grammar you have specified.  The first
time <tt>yacc.yacc()</tt> is invoked, you will get a message such as
this:

<blockquote>
<pre>
$ python calcparse.py
Generating LALR tables
calc >
</pre>
</blockquote>

Since table construction is relatively expensive (especially for large
grammars), the resulting parsing table is written to the current
directory in a file called <tt>parsetab.py</tt>.  In addition, a
debugging file called <tt>parser.out</tt> is created.  On subsequent
executions, <tt>yacc</tt> will reload the table from
<tt>parsetab.py</tt> unless it has detected a change in the underlying
grammar (in which case the tables and <tt>parsetab.py</tt> file are
regenerated).  Note: The names of parser output files can be changed
if necessary.  See the <a href="reference.html">PLY Reference</a> for details.

<p>
If any errors are detected in your grammar specification, <tt>yacc.py</tt> will produce
diagnostic messages and possibly raise an exception.  Some of the errors that can be detected include:

<ul>
<li>Duplicated function names (if more than one rule function has the same name in the grammar file).
<li>Shift/reduce and reduce/reduce conflicts generated by ambiguous grammars.
<li>Badly specified grammar rules.
<li>Infinite recursion (rules that can never terminate).
<li>Unused rules and tokens.
<li>Undefined rules and tokens.
</ul>
The next few sections discuss grammar specification in more detail.

<p>
The final part of the example shows how to actually run the parser
created by
<tt>yacc()</tt>.  To run the parser, simply call the <tt>parse()</tt>
method with a string of input text.  This will run all
of the grammar rules and return the result of the entire parse.  The
returned result is the value assigned to <tt>p[0]</tt> in the starting
grammar rule.

<H3><a name="ply_nn25"></a>6.2 Combining Grammar Rule Functions</H3>


When grammar rules are similar, they can be combined into a single function.
For example, consider the two rules in our earlier example:

<blockquote>
<pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    p[0] = p[1] + p[3]

def p_expression_minus(p):
    'expression : expression MINUS term'
    p[0] = p[1] - p[3]
</pre>
</blockquote>
Instead of writing two functions, you might write a single function like this:

<blockquote>
<pre>
def p_expression(p):
    '''expression : expression PLUS term
                  | expression MINUS term'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
</pre>
</blockquote>
In general, the docstring for any given function can contain multiple grammar rules.  So, it would
have also been legal (although possibly confusing) to write this:

<blockquote>
<pre>
def p_binary_operators(p):
    '''expression : expression PLUS term
                  | expression MINUS term
       term       : term TIMES factor
                  | term DIVIDE factor'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]
</pre>
</blockquote>

When combining grammar rules into a single function, it is usually a good idea for all of the rules to have
a similar structure (e.g., the same number of terms).  Otherwise, the corresponding action code may be more
complicated than necessary.  However, it is possible to handle simple cases using <tt>len()</tt>.  For example:

<blockquote>
<pre>
def p_expressions(p):
    '''expression : expression MINUS expression
                  | MINUS expression'''
    if (len(p) == 4):
        p[0] = p[1] - p[3]
    elif (len(p) == 3):
        p[0] = -p[2]
</pre>
</blockquote>
If parsing performance is a concern, you should resist the urge to put
too much conditional processing into a single grammar rule as shown in
these examples.  When you add checks to see which grammar rule is
being handled, you are actually duplicating the work that the parser
has already performed (i.e., the parser already knows exactly what rule it
matched).  You can eliminate this overhead by using a
separate <tt>p_rule()</tt> function for each grammar rule.

<H3><a name="ply_nn26"></a>6.3 Character Literals</H3>


If desired, a grammar may contain tokens defined as single character literals.  For example:

<blockquote>
<pre>
def p_binary_operators(p):
    '''expression : expression '+' term
                  | expression '-' term
       term       : term '*' factor
                  | term '/' factor'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]
</pre>
</blockquote>

A character literal must be enclosed in quotes such as <tt>'+'</tt>.  In addition, if literals are used, they must be declared in the
corresponding <tt>lex</tt> file through the use of a special <tt>literals</tt> declaration.

<blockquote>
<pre>
# Literals.  Should be placed in module given to lex()
literals = ['+', '-', '*', '/']
</pre>
</blockquote>

<b>Character literals are limited to a single character</b>.  Thus, it is not legal to specify literals such as <tt>'&lt;='</tt> or <tt>'=='</tt>.  For this, use
the normal lexing rules (e.g., define a rule such as <tt>t_EQ = r'=='</tt>).
<H3><a name="ply_nn26"></a>6.4 Empty Productions</H3>


<tt>yacc.py</tt> can handle empty productions by defining a rule like this:

<blockquote>
<pre>
def p_empty(p):
    'empty :'
    pass
</pre>
</blockquote>

Now to use the empty production, simply use 'empty' as a symbol.  For example:

<blockquote>
<pre>
def p_optitem(p):
    '''optitem : item
               | empty'''
    ...
</pre>
</blockquote>

Note: You can write empty rules anywhere by simply specifying an empty
right-hand side.  However, I personally find that writing an "empty"
rule and using "empty" to denote an empty production is easier to read
and more clearly states your intentions.

<H3><a name="ply_nn28"></a>6.5 Changing the Starting Symbol</H3>


Normally, the first rule found in a yacc specification defines the starting grammar rule (top level rule).  To change this, simply
supply a <tt>start</tt> specifier in your file.  For example:

<blockquote>
<pre>
start = 'foo'

def p_bar(p):
    'bar : A B'

# This is the starting rule due to the start specifier above
def p_foo(p):
    'foo : bar X'
...
</pre>
</blockquote>

The use of a <tt>start</tt> specifier may be useful during debugging
since you can use it to have yacc build a subset of a larger grammar.
For this purpose, it is also possible to specify a starting symbol as
an argument to <tt>yacc()</tt>. For example:

<blockquote>
<pre>
yacc.yacc(start='foo')
</pre>
</blockquote>
<H3><a name="ply_nn27"></a>6.6 Dealing With Ambiguous Grammars</H3>


The expression grammar given in the earlier example has been written
in a special format to eliminate ambiguity.  However, in many
situations, it is extremely difficult or awkward to write grammars in
this format.  A much more natural way to express the grammar is in a
more compact form like this:

<blockquote>
<pre>
expression : expression PLUS expression
           | expression MINUS expression
           | expression TIMES expression
           | expression DIVIDE expression
           | LPAREN expression RPAREN
           | NUMBER
</pre>
</blockquote>

Unfortunately, this grammar specification is ambiguous.  For example,
if you are parsing the string "3 * 4 + 5", there is no way to tell how
the operators are supposed to be grouped.  For example, does the
expression mean "(3 * 4) + 5" or is it "3 * (4 + 5)"?

<p>
When an ambiguous grammar is given to <tt>yacc.py</tt> it will print
messages about "shift/reduce conflicts" or "reduce/reduce conflicts".
A shift/reduce conflict is caused when the parser generator can't
decide whether to reduce a rule or shift a symbol on the
parsing stack.  For example, consider the string "3 * 4 + 5" and the
internal parsing stack:

<blockquote>
<pre>
Step Symbol Stack           Input Tokens            Action
---- ---------------------  ---------------------   -------------------------------
1    $                                3 * 4 + 5$    Shift 3
2    $ 3                                * 4 + 5$    Reduce: expression : NUMBER
3    $ expr                             * 4 + 5$    Shift *
4    $ expr *                             4 + 5$    Shift 4
5    $ expr * 4                             + 5$    Reduce: expression : NUMBER
6    $ expr * expr                          + 5$    SHIFT/REDUCE CONFLICT ????
</pre>
</blockquote>
19322632Sstever@eecs.umich.edu
19336498Snate@binkert.orgIn this case, when the parser reaches step 6, it has two options.  One
19346498Snate@binkert.orgis to reduce the rule <tt>expr : expr * expr</tt> on the stack.  The
19356498Snate@binkert.orgother option is to shift the token <tt>+</tt> on the stack.  Both
19366498Snate@binkert.orgoptions are perfectly legal from the rules of the
19376498Snate@binkert.orgcontext-free-grammar.
19382632Sstever@eecs.umich.edu
<p>
By default, all shift/reduce conflicts are resolved in favor of
shifting.  Therefore, in the above example, the parser will always
shift the <tt>+</tt> instead of reducing.  Although this strategy
works in many cases (for example, the case of
"if-then" versus "if-then-else"), it is not enough for arithmetic expressions.  In fact,
in the above example, the decision to shift <tt>+</tt> is completely
wrong---we should have reduced <tt>expr * expr</tt> since
multiplication has higher mathematical precedence than addition.

<p>To resolve ambiguity, especially in expression
grammars, <tt>yacc.py</tt> allows individual tokens to be assigned a
precedence level and associativity.  This is done by adding a variable
<tt>precedence</tt> to the grammar file like this:

<blockquote>
<pre>
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)
</pre>
</blockquote>

This declaration specifies that <tt>PLUS</tt>/<tt>MINUS</tt> have the
same precedence level and are left-associative and that
<tt>TIMES</tt>/<tt>DIVIDE</tt> have the same precedence and are
left-associative.  Within the <tt>precedence</tt> declaration, tokens
are ordered from lowest to highest precedence. Thus, this declaration
specifies that <tt>TIMES</tt>/<tt>DIVIDE</tt> have higher precedence
than <tt>PLUS</tt>/<tt>MINUS</tt> (since they appear later in the
precedence specification).

<p>
The precedence specification works by associating a numerical
precedence level value and associativity direction to the listed
tokens.  For example, in the above example you get:

<blockquote>
<pre>
PLUS      : level = 1,  assoc = 'left'
MINUS     : level = 1,  assoc = 'left'
TIMES     : level = 2,  assoc = 'left'
DIVIDE    : level = 2,  assoc = 'left'
</pre>
</blockquote>
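<p>
The way a <tt>precedence</tt> tuple maps to these levels can be sketched in a
few lines of plain Python (an illustration of the numbering scheme only;
this helper is not part of PLY's API):

```python
# Illustration of how a precedence declaration assigns (level, assoc)
# pairs to tokens; entries listed later receive higher levels.
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)

prec_table = {}
for level, spec in enumerate(precedence, start=1):
    assoc = spec[0]
    for tok in spec[1:]:
        prec_table[tok] = (level, assoc)

assert prec_table['PLUS'] == (1, 'left')
assert prec_table['TIMES'] == (2, 'left')
```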

These values are then used to attach a numerical precedence value and
associativity direction to each grammar rule. <em>This is always
determined by looking at the precedence of the right-most terminal
symbol.</em>  For example:

<blockquote>
<pre>
expression : expression PLUS expression                 # level = 1, left
           | expression MINUS expression                # level = 1, left
           | expression TIMES expression                # level = 2, left
           | expression DIVIDE expression               # level = 2, left
           | LPAREN expression RPAREN                   # level = None (not specified)
           | NUMBER                                     # level = None (not specified)
</pre>
</blockquote>

When shift/reduce conflicts are encountered, the parser generator resolves the conflict by
looking at the precedence rules and associativity specifiers.

<p>
<ol>
<li>If the current token has higher precedence than the rule on the stack, it is shifted.
<li>If the grammar rule on the stack has higher precedence, the rule is reduced.
<li>If the current token and the grammar rule have the same precedence, the
rule is reduced for left associativity, whereas the token is shifted for right associativity.
<li>If nothing is known about the precedence, shift/reduce conflicts are resolved in
favor of shifting (the default).
</ol>
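<p>
These four rules can be modeled by a small decision function (an
illustrative sketch of the behavior described above, not code taken
from <tt>yacc.py</tt>; the <tt>nonassoc</tt> case is omitted):

```python
# Sketch of the four shift/reduce resolution rules.  tok and rule are
# (level, assoc) pairs for the lookahead token and the candidate rule,
# or None when no precedence information is available.
def resolve(tok, rule):
    if tok is None or rule is None:
        return 'shift'        # rule 4: default is to shift
    tok_level, _ = tok
    rule_level, rule_assoc = rule
    if tok_level > rule_level:
        return 'shift'        # rule 1: token binds tighter
    if rule_level > tok_level:
        return 'reduce'       # rule 2: rule binds tighter
    # rule 3: equal precedence, associativity decides
    return 'reduce' if rule_assoc == 'left' else 'shift'

# TIMES (level 2) seen after a PLUS rule (level 1): shift
assert resolve((2, 'left'), (1, 'left')) == 'shift'
# PLUS (level 1) seen after a TIMES rule (level 2): reduce
assert resolve((1, 'left'), (2, 'left')) == 'reduce'
```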

For example, if "expression PLUS expression" has been parsed and the
next token is "TIMES", the action is going to be a shift because
"TIMES" has a higher precedence level than "PLUS".  On the other hand,
if "expression TIMES expression" has been parsed and the next token is
"PLUS", the action is going to be reduce because "PLUS" has a lower
precedence than "TIMES".

<p>
When shift/reduce conflicts are resolved using the first three
techniques (with the help of precedence rules), <tt>yacc.py</tt> will
report no errors or conflicts in the grammar (although it will print
some information in the <tt>parser.out</tt> debugging file).

<p>
One problem with the precedence specifier technique is that it is
sometimes necessary to change the precedence of an operator in certain
contexts.  For example, consider a unary-minus operator in "3 + 4 *
-5".  Mathematically, the unary minus is normally given a very high
precedence--being evaluated before the multiplication.  However, in our
precedence specifier, MINUS has a lower precedence than TIMES.  To
deal with this, precedence rules can be given for so-called "fictitious tokens"
like this:

<blockquote>
<pre>
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),            # Unary minus operator
)
</pre>
</blockquote>

Now, in the grammar file, we can write our unary minus rule like this:

<blockquote>
<pre>
def p_expr_uminus(p):
    'expression : MINUS expression %prec UMINUS'
    p[0] = -p[2]
</pre>
</blockquote>

In this case, <tt>%prec UMINUS</tt> overrides the default rule precedence--setting it to that
of UMINUS in the precedence specifier.

<p>
At first, the use of UMINUS in this example may appear very confusing.
UMINUS is not an input token or a grammar rule.  Instead, you should
think of it as the name of a special marker in the precedence table.  When you use the <tt>%prec</tt> qualifier, you're simply
telling yacc that you want the precedence of the expression to be the same as for this special marker instead of the usual precedence.

<p>
It is also possible to specify non-associativity in the <tt>precedence</tt> table. This would
be used when you <em>don't</em> want operations to chain together.  For example, suppose
you wanted to support comparison operators like <tt>&lt;</tt> and <tt>&gt;</tt> but you didn't want to allow
combinations like <tt>a &lt; b &lt; c</tt>.  To do this, simply specify a rule like this:

<blockquote>
<pre>
precedence = (
    ('nonassoc', 'LESSTHAN', 'GREATERTHAN'),  # Nonassociative operators
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),            # Unary minus operator
)
</pre>
</blockquote>

<p>
If you do this, the occurrence of input text such as <tt>a &lt; b &lt; c</tt> will result in a syntax error.  However, simple
expressions such as <tt>a &lt; b</tt> will still be fine.
20884479Sbinkertn@umich.edu<p>
20892632Sstever@eecs.umich.eduReduce/reduce conflicts are caused when there are multiple grammar
20902632Sstever@eecs.umich.edurules that can be applied to a given set of symbols.  This kind of
20912632Sstever@eecs.umich.educonflict is almost always bad and is always resolved by picking the
20922632Sstever@eecs.umich.edurule that appears first in the grammar file.   Reduce/reduce conflicts
20932632Sstever@eecs.umich.eduare almost always caused when different sets of grammar rules somehow
20942632Sstever@eecs.umich.edugenerate the same set of symbols.  For example:
20952632Sstever@eecs.umich.edu
20962632Sstever@eecs.umich.edu<blockquote>
20972632Sstever@eecs.umich.edu<pre>
20982632Sstever@eecs.umich.eduassignment :  ID EQUALS NUMBER
20992632Sstever@eecs.umich.edu           |  ID EQUALS expression
21002632Sstever@eecs.umich.edu           
21012632Sstever@eecs.umich.eduexpression : expression PLUS expression
21022632Sstever@eecs.umich.edu           | expression MINUS expression
21032632Sstever@eecs.umich.edu           | expression TIMES expression
21042632Sstever@eecs.umich.edu           | expression DIVIDE expression
21052632Sstever@eecs.umich.edu           | LPAREN expression RPAREN
21062632Sstever@eecs.umich.edu           | NUMBER
21072632Sstever@eecs.umich.edu</pre>
21082632Sstever@eecs.umich.edu</blockquote>
21092632Sstever@eecs.umich.edu
21102632Sstever@eecs.umich.eduIn this case, a reduce/reduce conflict exists between these two rules:
21112632Sstever@eecs.umich.edu
21122632Sstever@eecs.umich.edu<blockquote>
21132632Sstever@eecs.umich.edu<pre>
21142632Sstever@eecs.umich.eduassignment  : ID EQUALS NUMBER
21152632Sstever@eecs.umich.eduexpression  : NUMBER
21162632Sstever@eecs.umich.edu</pre>
21172632Sstever@eecs.umich.edu</blockquote>
21182632Sstever@eecs.umich.edu
21192632Sstever@eecs.umich.eduFor example, if you wrote "a = 5", the parser can't figure out if this
21204479Sbinkertn@umich.eduis supposed to be reduced as <tt>assignment : ID EQUALS NUMBER</tt> or
21212632Sstever@eecs.umich.eduwhether it's supposed to reduce the 5 as an expression and then reduce
21222632Sstever@eecs.umich.eduthe rule <tt>assignment : ID EQUALS expression</tt>.
21232632Sstever@eecs.umich.edu
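<p>
Conflicts like this can usually be eliminated by removing the redundant rule so that a <tt>NUMBER</tt> on the right-hand side is only ever reduced through <tt>expression</tt>.  The following sketch is illustrative (the <tt>('assign', ...)</tt> tuple-building actions are hypothetical, not part of the grammar above):

```python
# Hypothetical fix for the reduce/reduce conflict above: drop the
# redundant "assignment : ID EQUALS NUMBER" rule entirely, so a NUMBER
# is always reduced through "expression : NUMBER" first.
def p_assignment(p):
    'assignment : ID EQUALS expression'
    p[0] = ('assign', p[1], p[3])

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = p[1]
```

<p>
With only one reduction path for <tt>NUMBER</tt>, the reduce/reduce conflict disappears.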
<p>
It should be noted that reduce/reduce conflicts are notoriously
difficult to spot by simply looking at the input grammar.  When a
reduce/reduce conflict occurs, <tt>yacc()</tt> will try to help by
printing a warning message such as this:

<blockquote>
<pre>
WARNING: 1 reduce/reduce conflict
WARNING: reduce/reduce conflict in state 15 resolved using rule (assignment -> ID EQUALS NUMBER)
WARNING: rejected rule (expression -> NUMBER)
</pre>
</blockquote>

This message identifies the two rules that are in conflict.  However,
it may not tell you how the parser arrived at such a state.  To try
and figure it out, you'll probably have to look at your grammar and
the contents of the <tt>parser.out</tt> debugging file with an
appropriately high level of caffeination.

<H3><a name="ply_nn28"></a>6.7 The parser.out file</H3>

Tracking down shift/reduce and reduce/reduce conflicts is one of the finer pleasures of using an LR
parsing algorithm.  To assist in debugging, <tt>yacc.py</tt> creates a debugging file called
<tt>parser.out</tt> when it generates the parsing table.   The contents of this file look like the following:

<blockquote>
<pre>
Unused terminals:


Grammar

Rule 1     expression -> expression PLUS expression
Rule 2     expression -> expression MINUS expression
Rule 3     expression -> expression TIMES expression
Rule 4     expression -> expression DIVIDE expression
Rule 5     expression -> NUMBER
Rule 6     expression -> LPAREN expression RPAREN

Terminals, with rules where they appear

TIMES                : 3
error                :
MINUS                : 2
RPAREN               : 6
LPAREN               : 6
DIVIDE               : 4
PLUS                 : 1
NUMBER               : 5

Nonterminals, with rules where they appear

expression           : 1 1 2 2 3 3 4 4 6 0


Parsing method: LALR


state 0

    S' -> . expression
    expression -> . expression PLUS expression
    expression -> . expression MINUS expression
    expression -> . expression TIMES expression
    expression -> . expression DIVIDE expression
    expression -> . NUMBER
    expression -> . LPAREN expression RPAREN

    NUMBER          shift and go to state 3
    LPAREN          shift and go to state 2


state 1

    S' -> expression .
    expression -> expression . PLUS expression
    expression -> expression . MINUS expression
    expression -> expression . TIMES expression
    expression -> expression . DIVIDE expression

    PLUS            shift and go to state 6
    MINUS           shift and go to state 5
    TIMES           shift and go to state 4
    DIVIDE          shift and go to state 7


state 2

    expression -> LPAREN . expression RPAREN
    expression -> . expression PLUS expression
    expression -> . expression MINUS expression
    expression -> . expression TIMES expression
    expression -> . expression DIVIDE expression
    expression -> . NUMBER
    expression -> . LPAREN expression RPAREN

    NUMBER          shift and go to state 3
    LPAREN          shift and go to state 2


state 3

    expression -> NUMBER .

    $               reduce using rule 5
    PLUS            reduce using rule 5
    MINUS           reduce using rule 5
    TIMES           reduce using rule 5
    DIVIDE          reduce using rule 5
    RPAREN          reduce using rule 5


state 4

    expression -> expression TIMES . expression
    expression -> . expression PLUS expression
    expression -> . expression MINUS expression
    expression -> . expression TIMES expression
    expression -> . expression DIVIDE expression
    expression -> . NUMBER
    expression -> . LPAREN expression RPAREN

    NUMBER          shift and go to state 3
    LPAREN          shift and go to state 2


state 5

    expression -> expression MINUS . expression
    expression -> . expression PLUS expression
    expression -> . expression MINUS expression
    expression -> . expression TIMES expression
    expression -> . expression DIVIDE expression
    expression -> . NUMBER
    expression -> . LPAREN expression RPAREN

    NUMBER          shift and go to state 3
    LPAREN          shift and go to state 2


state 6

    expression -> expression PLUS . expression
    expression -> . expression PLUS expression
    expression -> . expression MINUS expression
    expression -> . expression TIMES expression
    expression -> . expression DIVIDE expression
    expression -> . NUMBER
    expression -> . LPAREN expression RPAREN

    NUMBER          shift and go to state 3
    LPAREN          shift and go to state 2


state 7

    expression -> expression DIVIDE . expression
    expression -> . expression PLUS expression
    expression -> . expression MINUS expression
    expression -> . expression TIMES expression
    expression -> . expression DIVIDE expression
    expression -> . NUMBER
    expression -> . LPAREN expression RPAREN

    NUMBER          shift and go to state 3
    LPAREN          shift and go to state 2


state 8

    expression -> LPAREN expression . RPAREN
    expression -> expression . PLUS expression
    expression -> expression . MINUS expression
    expression -> expression . TIMES expression
    expression -> expression . DIVIDE expression

    RPAREN          shift and go to state 13
    PLUS            shift and go to state 6
    MINUS           shift and go to state 5
    TIMES           shift and go to state 4
    DIVIDE          shift and go to state 7


state 9

    expression -> expression TIMES expression .
    expression -> expression . PLUS expression
    expression -> expression . MINUS expression
    expression -> expression . TIMES expression
    expression -> expression . DIVIDE expression

    $               reduce using rule 3
    PLUS            reduce using rule 3
    MINUS           reduce using rule 3
    TIMES           reduce using rule 3
    DIVIDE          reduce using rule 3
    RPAREN          reduce using rule 3

  ! PLUS            [ shift and go to state 6 ]
  ! MINUS           [ shift and go to state 5 ]
  ! TIMES           [ shift and go to state 4 ]
  ! DIVIDE          [ shift and go to state 7 ]

state 10

    expression -> expression MINUS expression .
    expression -> expression . PLUS expression
    expression -> expression . MINUS expression
    expression -> expression . TIMES expression
    expression -> expression . DIVIDE expression

    $               reduce using rule 2
    PLUS            reduce using rule 2
    MINUS           reduce using rule 2
    RPAREN          reduce using rule 2
    TIMES           shift and go to state 4
    DIVIDE          shift and go to state 7

  ! TIMES           [ reduce using rule 2 ]
  ! DIVIDE          [ reduce using rule 2 ]
  ! PLUS            [ shift and go to state 6 ]
  ! MINUS           [ shift and go to state 5 ]

state 11

    expression -> expression PLUS expression .
    expression -> expression . PLUS expression
    expression -> expression . MINUS expression
    expression -> expression . TIMES expression
    expression -> expression . DIVIDE expression

    $               reduce using rule 1
    PLUS            reduce using rule 1
    MINUS           reduce using rule 1
    RPAREN          reduce using rule 1
    TIMES           shift and go to state 4
    DIVIDE          shift and go to state 7

  ! TIMES           [ reduce using rule 1 ]
  ! DIVIDE          [ reduce using rule 1 ]
  ! PLUS            [ shift and go to state 6 ]
  ! MINUS           [ shift and go to state 5 ]

state 12

    expression -> expression DIVIDE expression .
    expression -> expression . PLUS expression
    expression -> expression . MINUS expression
    expression -> expression . TIMES expression
    expression -> expression . DIVIDE expression

    $               reduce using rule 4
    PLUS            reduce using rule 4
    MINUS           reduce using rule 4
    TIMES           reduce using rule 4
    DIVIDE          reduce using rule 4
    RPAREN          reduce using rule 4

  ! PLUS            [ shift and go to state 6 ]
  ! MINUS           [ shift and go to state 5 ]
  ! TIMES           [ shift and go to state 4 ]
  ! DIVIDE          [ shift and go to state 7 ]

state 13

    expression -> LPAREN expression RPAREN .

    $               reduce using rule 6
    PLUS            reduce using rule 6
    MINUS           reduce using rule 6
    TIMES           reduce using rule 6
    DIVIDE          reduce using rule 6
    RPAREN          reduce using rule 6
</pre>
</blockquote>

The different states that appear in this file represent every possible
sequence of valid input tokens allowed by the grammar.  When receiving
input tokens, the parser builds up a stack and looks for matching
rules.  Each state keeps track of the grammar rules that might be in
the process of being matched at that point.  Within each rule, the "."
character indicates the current location of the parse within that
rule.  In addition, the actions for each valid input token are listed.
When a shift/reduce or reduce/reduce conflict arises, rules
<em>not</em> selected are prefixed with <tt>!</tt>.  For example:

<blockquote>
<pre>
  ! TIMES           [ reduce using rule 2 ]
  ! DIVIDE          [ reduce using rule 2 ]
  ! PLUS            [ shift and go to state 6 ]
  ! MINUS           [ shift and go to state 5 ]
</pre>
</blockquote>

By looking at these rules (and with a little practice), you can usually track down the source
of most parsing conflicts.  It should also be stressed that not all shift/reduce conflicts are
bad.  However, the only way to be sure that they are resolved correctly is to look at <tt>parser.out</tt>.

<H3><a name="ply_nn29"></a>6.8 Syntax Error Handling</H3>

If you are creating a parser for production use, the handling of
syntax errors is important.  As a general rule, you don't want a
parser to simply throw up its hands and stop at the first sign of
trouble.  Instead, you want it to report the error, recover if possible, and
continue parsing so that all of the errors in the input get reported
to the user at once.   This is the standard behavior found in compilers
for languages such as C, C++, and Java.

<p>
In PLY, when a syntax error occurs during parsing, the error is immediately
detected (i.e., the parser does not read any more tokens beyond the
source of the error).  However, at this point, the parser enters a
recovery mode that can be used to try and continue further parsing.
As a general rule, error recovery in LR parsers is a delicate
topic that involves ancient rituals and black magic.   The recovery mechanism
provided by <tt>yacc.py</tt> is comparable to that of Unix yacc, so you may want to
consult a book like O'Reilly's "Lex and Yacc" for some of the finer details.

<p>
When a syntax error occurs, <tt>yacc.py</tt> performs the following steps:

<ol>
<li>On the first occurrence of an error, the user-defined <tt>p_error()</tt> function
is called with the offending token as an argument.  However, if the syntax error is due to
reaching the end-of-file, <tt>p_error()</tt> is called with an argument of <tt>None</tt>.
Afterwards, the parser enters
an "error-recovery" mode in which it will not make future calls to <tt>p_error()</tt> until it
has successfully shifted at least 3 tokens onto the parsing stack.

<p>
<li>If no recovery action is taken in <tt>p_error()</tt>, the offending lookahead token is replaced
with a special <tt>error</tt> token.

<p>
<li>If the offending lookahead token is already set to <tt>error</tt>, the top item of the parsing stack is
deleted.

<p>
<li>If the entire parsing stack is unwound, the parser enters a restart state and attempts to start
parsing from its initial state.

<p>
<li>If a grammar rule accepts <tt>error</tt> as a token, it will be
shifted onto the parsing stack.

<p>
<li>If the top item of the parsing stack is <tt>error</tt>, lookahead tokens will be discarded until the
parser can successfully shift a new symbol or reduce a rule involving <tt>error</tt>.
</ol>
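<p>
Because of step 1, a robust <tt>p_error()</tt> should be prepared to receive <tt>None</tt>.  A minimal sketch of such a function (the message text is illustrative, and <tt>print()</tt> is written as a function so the example runs on modern Pythons):

```python
# Minimal p_error() sketch covering both cases from step 1: a real
# offending token, or None when the error occurs at end-of-file.
def p_error(p):
    if p is None:
        print("Syntax error at end of input")
    else:
        print("Syntax error at token", p.type)
```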

<H4><a name="ply_nn30"></a>6.8.1 Recovery and resynchronization with error rules</H4>

The most well-behaved approach for handling syntax errors is to write grammar rules that include the <tt>error</tt>
token.  For example, suppose your language had a grammar rule for a print statement like this:

<blockquote>
<pre>
def p_statement_print(p):
     'statement : PRINT expr SEMI'
     ...
</pre>
</blockquote>

To account for the possibility of a bad expression, you might write an additional grammar rule like this:

<blockquote>
<pre>
def p_statement_print_error(p):
     'statement : PRINT error SEMI'
     print "Syntax error in print statement. Bad expression"
</pre>
</blockquote>

In this case, the <tt>error</tt> token will match any sequence of
tokens that might appear up to the first semicolon that is
encountered.  Once the semicolon is reached, the rule will be
invoked and the <tt>error</tt> token will go away.

<p>
This type of recovery is sometimes known as parser resynchronization.
The <tt>error</tt> token acts as a wildcard for any bad input text and
the token immediately following <tt>error</tt> acts as a
synchronization token.

<p>
It is important to note that the <tt>error</tt> token usually does not appear as the last token
on the right in an error rule.  For example:

<blockquote>
<pre>
def p_statement_print_error(p):
    'statement : PRINT error'
    print "Syntax error in print statement. Bad expression"
</pre>
</blockquote>

This is because the first bad token encountered will cause the rule to
be reduced immediately, which may make it difficult to recover if more bad
tokens follow.

<H4><a name="ply_nn31"></a>6.8.2 Panic mode recovery</H4>

An alternative error recovery scheme is to enter a panic-mode recovery in which tokens are
discarded up to a point where the parser might be able to recover in some sensible manner.

<p>
Panic mode recovery is implemented entirely in the <tt>p_error()</tt> function.  For example, this
function starts discarding tokens until it reaches a closing '}'.  Then, it restarts the
parser in its initial state.

<blockquote>
<pre>
def p_error(p):
    print "Whoa. You are seriously hosed."
    # Read ahead looking for a closing '}'
    while 1:
        tok = yacc.token()             # Get the next token
        if not tok or tok.type == 'RBRACE': break
    yacc.restart()
</pre>
</blockquote>

<p>
This next function simply discards the bad token and tells the parser that the error has been handled:

<blockquote>
<pre>
def p_error(p):
    print "Syntax error at token", p.type
    # Just discard the token and tell the parser it's okay.
    yacc.errok()
</pre>
</blockquote>

<P>
Within the <tt>p_error()</tt> function, three functions are available to control the behavior
of the parser:
<p>
<ul>
<li><tt>yacc.errok()</tt>.  This resets the parser state so it doesn't think it's in error-recovery
mode.   This will prevent an <tt>error</tt> token from being generated and will reset the internal
error counters so that the next syntax error will call <tt>p_error()</tt> again.

<p>
<li><tt>yacc.token()</tt>.  This returns the next token on the input stream.

<p>
<li><tt>yacc.restart()</tt>.  This discards the entire parsing stack and resets the parser
to its initial state.
</ul>

Note: these functions are only available inside <tt>p_error()</tt>; they cannot be used
at any other time.

<p>
To supply the next lookahead token to the parser, <tt>p_error()</tt> can return a token.  This can be
useful when trying to synchronize on special characters.  For example:

<blockquote>
<pre>
def p_error(p):
    # Read ahead looking for a terminating ";"
    while 1:
        tok = yacc.token()             # Get the next token
        if not tok or tok.type == 'SEMI': break
    yacc.errok()

    # Return SEMI to the parser as the next lookahead token
    return tok
</pre>
</blockquote>

<H4><a name="ply_nn35"></a>6.8.3 Signaling an error from a production</H4>

If necessary, a production rule can manually force the parser to enter error recovery.  This
is done by raising the <tt>SyntaxError</tt> exception like this:

<blockquote>
<pre>
def p_production(p):
    'production : some production ...'
    raise SyntaxError
</pre>
</blockquote>

The effect of raising <tt>SyntaxError</tt> is the same as if the last symbol shifted onto the
parsing stack was actually a syntax error.  Thus, when you do this, the last symbol shifted is popped off
of the parsing stack and the current lookahead token is set to an <tt>error</tt> token.   The parser
then enters error-recovery mode where it tries to reduce rules that can accept <tt>error</tt> tokens.
The steps that follow from this point are exactly the same as if a syntax error were detected and
<tt>p_error()</tt> were called.

<P>
One important aspect of manually setting an error is that the <tt>p_error()</tt> function will <b>NOT</b> be
called in this case.   If you need to issue an error message, make sure you do it in the production that
raises <tt>SyntaxError</tt>.

<P>
Note: This feature of PLY is meant to mimic the behavior of the YYERROR macro in yacc.
26326498Snate@binkert.org
<H4><a name="ply_nn32"></a>6.8.4 General comments on error handling</H4>


For most languages, error recovery with error rules and resynchronization characters is the most reliable
technique. This is because you can instrument the grammar to catch errors at selected places where it is relatively easy
to recover and continue parsing.  Panic mode recovery is really only useful in certain specialized applications where you might want
to discard huge portions of the input text to find a valid restart point.

<H3><a name="ply_nn33"></a>6.9 Line Number and Position Tracking</H3>


Position tracking is often a tricky problem when writing compilers.
By default, PLY tracks the line number and position of all tokens.
This information is available using the following functions:

<ul>
<li><tt>p.lineno(num)</tt>. Return the line number for symbol <em>num</em>.
<li><tt>p.lexpos(num)</tt>. Return the lexing position for symbol <em>num</em>.
</ul>

For example:

<blockquote>
<pre>
def p_expression(p):
    'expression : expression PLUS expression'
    line   = p.lineno(2)        # Line number of the PLUS token
    index  = p.lexpos(2)        # Position of the PLUS token
</pre>
</blockquote>

As an optional feature, <tt>yacc.py</tt> can automatically track line
numbers and positions for all of the grammar symbols as well.
However, this extra tracking requires extra processing and can
significantly slow down parsing.  Therefore, it must be enabled by
passing the
<tt>tracking=True</tt> option to <tt>yacc.parse()</tt>.  For example:

<blockquote>
<pre>
yacc.parse(data,tracking=True)
</pre>
</blockquote>

Once enabled, the <tt>lineno()</tt> and <tt>lexpos()</tt> methods work
for all grammar symbols.  In addition, two more methods can be
used:

<ul>
<li><tt>p.linespan(num)</tt>. Return a tuple (startline,endline) with the starting and ending line number for symbol <em>num</em>.
<li><tt>p.lexspan(num)</tt>. Return a tuple (start,end) with the starting and ending positions for symbol <em>num</em>.
</ul>

For example:

<blockquote>
<pre>
def p_expression(p):
    'expression : expression PLUS expression'
    p.lineno(1)        # Line number of the left expression
    p.lineno(2)        # Line number of the PLUS operator
    p.lineno(3)        # Line number of the right expression
    ...
    start,end = p.linespan(3)    # Start,end lines of the right expression
    starti,endi = p.lexspan(3)   # Start,end positions of the right expression

</pre>
</blockquote>

Note: The <tt>lexspan()</tt> function only returns the range of values up to the start of the last grammar symbol.

<p>
Although it may be convenient for PLY to track position information on
all grammar symbols, this is often unnecessary.  For example, if you
are merely using line number information in an error message, you can
often just key off of a specific token in the grammar rule.  For
example:

<blockquote>
<pre>
def p_bad_func(p):
    'funccall : fname LPAREN error RPAREN'
    # Line number reported from LPAREN token
    print "Bad function call at line", p.lineno(2)
</pre>
</blockquote>

<p>
Similarly, you may get better parsing performance if you only
selectively propagate line number information where it's needed using
the <tt>p.set_lineno()</tt> method.  For example:

<blockquote>
<pre>
def p_fname(p):
    'fname : ID'
    p[0] = p[1]
    p.set_lineno(0,p.lineno(1))
</pre>
</blockquote>

PLY doesn't retain line number information from rules that have already been
parsed.   If you are building an abstract syntax tree and need to have line numbers,
you should make sure that the line numbers appear in the tree itself.

<H3><a name="ply_nn34"></a>6.10 AST Construction</H3>


<tt>yacc.py</tt> provides no special functions for constructing an
abstract syntax tree.  However, such construction is easy enough to do
on your own.

<p>A minimal way to construct a tree is to simply create and
propagate a tuple or list in each grammar rule function.   There
are many possible ways to do this, but one example would be something
like this:

<blockquote>
<pre>
def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = ('binary-expression',p[2],p[1],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = ('group-expression',p[2])

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = ('number-expression',p[1])
</pre>
</blockquote>
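<p>
As a rough sketch of how such tuples might be consumed later, the following evaluator (a hypothetical helper, not part of PLY) walks a tuple-based tree of the shape produced by the rules above:

```python
# Hypothetical helper (not part of PLY): evaluate the tuple-based AST
# built by rules like p_expression_binop() above.
def evaluate(node):
    ntype = node[0]
    if ntype == 'number-expression':
        return node[1]
    elif ntype == 'group-expression':
        return evaluate(node[1])
    elif ntype == 'binary-expression':
        op = node[1]
        left, right = evaluate(node[2]), evaluate(node[3])
        if op == '+': return left + right
        if op == '-': return left - right
        if op == '*': return left * right
        if op == '/': return left / right
    raise ValueError("Unknown node: %r" % (node,))

# A tree for 2 * (3 + 4), written out by hand in the same shape
# the grammar rules above would produce
tree = ('binary-expression', '*',
        ('number-expression', 2),
        ('group-expression',
         ('binary-expression', '+',
          ('number-expression', 3),
          ('number-expression', 4))))
```

Note that the operator sits first in each binary-expression tuple, mirroring the <tt>p[0] = ('binary-expression',p[2],p[1],p[3])</tt> assignment.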

<p>
Another approach is to create a set of data structures for the different
kinds of abstract syntax tree nodes and assign nodes to <tt>p[0]</tt>
in each rule.  For example:

<blockquote>
<pre>
class Expr: pass

class BinOp(Expr):
    def __init__(self,left,op,right):
        self.type = "binop"
        self.left = left
        self.right = right
        self.op = op

class Number(Expr):
    def __init__(self,value):
        self.type = "number"
        self.value = value

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = BinOp(p[1],p[2],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = p[2]

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = Number(p[1])
</pre>
</blockquote>

The advantage of this approach is that it may make it easier to attach more complicated
semantics, type checking, code generation, and other features to the node classes.

<p>
To simplify tree traversal, it may make sense to pick a very generic
tree structure for your parse tree nodes.  For example:

<blockquote>
<pre>
class Node:
    def __init__(self,type,children=None,leaf=None):
         self.type = type
         if children:
              self.children = children
         else:
              self.children = [ ]
         self.leaf = leaf

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = Node("binop", [p[1],p[3]], p[2])
</pre>
</blockquote>
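<p>
One benefit of the generic structure is that tree walkers don't need to know about specific node types.  The following sketch (hypothetical code, not part of PLY) shows a preorder traversal over such <tt>Node</tt> objects:

```python
# The generic Node class from above, plus a hypothetical preorder
# traversal that visits every node in the tree.
class Node:
    def __init__(self, type, children=None, leaf=None):
        self.type = type
        self.children = children if children else []
        self.leaf = leaf

def preorder(node, visit):
    visit(node)                      # Visit the node itself first
    for child in node.children:
        preorder(child, visit)       # Then each subtree, left to right

# A hand-built tree for "1 + 2": a binop node with two number leaves
tree = Node("binop",
            [Node("number", leaf=1), Node("number", leaf=2)],
            "+")

types = []
preorder(tree, lambda n: types.append(n.type))
```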

<H3><a name="ply_nn35"></a>6.11 Embedded Actions</H3>


The parsing technique used by yacc only allows actions to be executed at the end of a rule.  For example,
suppose you have a rule like this:

<blockquote>
<pre>
def p_foo(p):
    "foo : A B C D"
    print "Parsed a foo", p[1],p[2],p[3],p[4]
</pre>
</blockquote>

<p>
In this case, the supplied action code only executes after all of the
symbols <tt>A</tt>, <tt>B</tt>, <tt>C</tt>, and <tt>D</tt> have been
parsed. Sometimes, however, it is useful to execute small code
fragments during intermediate stages of parsing.  For example, suppose
you wanted to perform some action immediately after <tt>A</tt> has
been parsed. To do this, write an empty rule like this:

<blockquote>
<pre>
def p_foo(p):
    "foo : A seen_A B C D"
    print "Parsed a foo", p[1],p[3],p[4],p[5]
    print "seen_A returned", p[2]

def p_seen_A(p):
    "seen_A :"
    print "Saw an A = ", p[-1]   # Access grammar symbol to left
    p[0] = some_value            # Assign value to seen_A

</pre>
</blockquote>

<p>
In this example, the empty <tt>seen_A</tt> rule executes immediately
after <tt>A</tt> is shifted onto the parsing stack.  Within this
rule, <tt>p[-1]</tt> refers to the symbol on the stack that appears
immediately to the left of the <tt>seen_A</tt> symbol.  In this case,
it would be the value of <tt>A</tt> in the <tt>foo</tt> rule
immediately above.  Like other rules, a value can be returned from an
embedded action by simply assigning it to <tt>p[0]</tt>.

<p>
The use of embedded actions can sometimes introduce extra shift/reduce conflicts.  For example,
this grammar has no conflicts:

<blockquote>
<pre>
def p_foo(p):
    """foo : abcd
           | abcx"""

def p_abcd(p):
    "abcd : A B C D"

def p_abcx(p):
    "abcx : A B C X"
</pre>
</blockquote>

However, if you insert an embedded action into one of the rules like this,

<blockquote>
<pre>
def p_foo(p):
    """foo : abcd
           | abcx"""

def p_abcd(p):
    "abcd : A B C D"

def p_abcx(p):
    "abcx : A B seen_AB C X"

def p_seen_AB(p):
    "seen_AB :"
</pre>
</blockquote>

an extra shift-reduce conflict will be introduced.  This conflict is
caused by the fact that the same symbol <tt>C</tt> appears next in
both the <tt>abcd</tt> and <tt>abcx</tt> rules.  The parser can either
shift the symbol (<tt>abcd</tt> rule) or reduce the empty
rule <tt>seen_AB</tt> (<tt>abcx</tt> rule).

<p>
A common use of embedded rules is to control other aspects of parsing
such as scoping of local variables.  For example, if you were parsing C code, you might
write code like this:

<blockquote>
<pre>
def p_statements_block(p):
    "statements : LBRACE new_scope statements RBRACE"
    # Action code
    ...
    pop_scope()        # Return to previous scope

def p_new_scope(p):
    "new_scope :"
    # Create a new scope for local variables
    s = new_scope()
    push_scope(s)
    ...
</pre>
</blockquote>

In this case, the embedded action <tt>new_scope</tt> executes
immediately after a <tt>LBRACE</tt> (<tt>{</tt>) symbol is parsed.
This might adjust internal symbol tables and other aspects of the
parser.  Upon completion of the <tt>statements</tt> rule, code
might undo the operations performed in the embedded action
(e.g., <tt>pop_scope()</tt>).
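<p>
The scope functions used above (<tt>new_scope()</tt>, <tt>push_scope()</tt>, <tt>pop_scope()</tt>) are not part of PLY; one minimal way to implement that kind of machinery is a stack of dictionaries:

```python
# Hypothetical scope machinery (not provided by PLY): a stack of
# dictionaries, with the innermost scope last.
scopes = [{}]                  # Global scope at the bottom

def push_scope():
    scopes.append({})          # Called from the embedded action

def pop_scope():
    scopes.pop()               # Called when the block rule completes

def declare(name, value):
    scopes[-1][name] = value

def lookup(name):
    # Search from the innermost scope outward
    for scope in reversed(scopes):
        if name in scope:
            return scope[name]
    raise NameError(name)

push_scope()                   # Entering a "{ ... }" block
declare("x", 1)
inner = lookup("x")            # Resolves in the inner scope
pop_scope()                    # Leaving the block discards "x"
```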

<H3><a name="ply_nn36"></a>6.12 Miscellaneous Yacc Notes</H3>


<ul>
<li>The default parsing method is LALR. To use SLR instead, run yacc() as follows:

<blockquote>
<pre>
yacc.yacc(method="SLR")
</pre>
</blockquote>
Note: LALR table generation takes approximately twice as long as SLR table generation.   There is no
difference in actual parsing performance---the same code is used in both cases.   LALR is preferred when working
with more complicated grammars since it is more powerful.

<p>

<li>By default, <tt>yacc.py</tt> relies on <tt>lex.py</tt> for tokenizing.  However, an alternative tokenizer
can be supplied as follows:

<blockquote>
<pre>
yacc.parse(lexer=x)
</pre>
</blockquote>
In this case, <tt>x</tt> must be a Lexer object that minimally has an <tt>x.token()</tt> method for retrieving the next
token.   If an input string is given to <tt>yacc.parse()</tt>, the lexer must also have an <tt>x.input()</tt> method.

<p>
<li>By default, <tt>yacc.py</tt> generates tables in debugging mode (which produces the <tt>parser.out</tt> file and other output).
To disable this, use

<blockquote>
<pre>
yacc.yacc(debug=0)
</pre>
</blockquote>

<p>
<li>To change the name of the <tt>parsetab.py</tt> file, use:

<blockquote>
<pre>
yacc.yacc(tabmodule="foo")
</pre>
</blockquote>

<p>
<li>To change the directory in which the <tt>parsetab.py</tt> file and other output files are written, use:
<blockquote>
<pre>
yacc.yacc(tabmodule="foo",outputdir="somedirectory")
</pre>
</blockquote>

<p>
<li>To prevent yacc from generating any kind of parser table file, use:
<blockquote>
<pre>
yacc.yacc(write_tables=0)
</pre>
</blockquote>

Note: If you disable table generation, yacc() will regenerate the parsing tables
each time it runs (which may take a while depending on how large your grammar is).

<P>
<li>To print copious amounts of debugging during parsing, use:

<blockquote>
<pre>
yacc.parse(debug=1)
</pre>
</blockquote>

<p>
<li>The <tt>yacc.yacc()</tt> function really returns a parser object.  If you want to support multiple
parsers in the same application, do this:

<blockquote>
<pre>
p = yacc.yacc()
...
p.parse()
</pre>
</blockquote>

Note: The function <tt>yacc.parse()</tt> is bound to the last parser that was generated.

<p>
<li>Since the generation of the LALR tables is relatively expensive, previously generated tables are
cached and reused if possible.  The decision to regenerate the tables is determined by taking an MD5
checksum of all grammar rules and precedence rules.  Only in the event of a mismatch are the tables regenerated.

<p>
It should be noted that table generation is reasonably efficient, even for grammars that involve around 100 rules
and several hundred states.  For more complex languages such as C, table generation may take 30-60 seconds on a slow
machine.  Please be patient.

<p>
<li>Since LR parsing is driven by tables, the performance of the parser is largely independent of the
size of the grammar.   The biggest bottlenecks will be the lexer and the complexity of the code in your grammar rules.
</ul>
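<p>
The table-caching scheme mentioned in the notes above can be sketched conceptually as follows.  This is only an illustration of the idea, not PLY's actual implementation: hash the grammar and precedence specifications, and regenerate the tables only when the digest changes.

```python
# Conceptual sketch of signature-based table caching (not PLY's
# actual implementation): hash the grammar specification so cached
# tables can be reused when nothing has changed.
import hashlib

grammar_rules = [
    "expression : expression PLUS expression",
    "expression : NUMBER",
]
precedence = [("left", "PLUS")]

sig = hashlib.md5()
for rule in grammar_rules:
    sig.update(rule.encode("utf-8"))
for assoc, tok in precedence:
    sig.update((assoc + " " + tok).encode("utf-8"))

digest = sig.hexdigest()   # Compared against the digest saved with the tables
```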

<H2><a name="ply_nn37"></a>7. Multiple Parsers and Lexers</H2>


In advanced parsing applications, you may want to have multiple
parsers and lexers.

<p>
As a general rule, this isn't a problem.   However, to make it work,
you need to carefully make sure everything gets hooked up correctly.
First, make sure you save the objects returned by <tt>lex()</tt> and
<tt>yacc()</tt>.  For example:

<blockquote>
<pre>
lexer  = lex.lex()       # Return lexer object
parser = yacc.yacc()     # Return parser object
</pre>
</blockquote>

Next, when parsing, make sure you give the <tt>parse()</tt> function a reference to the lexer it
should be using.  For example:

<blockquote>
<pre>
parser.parse(text,lexer=lexer)
</pre>
</blockquote>

If you forget to do this, the parser will use the last lexer
created--which is not always what you want.

<p>
Within lexer and parser rule functions, these objects are also
available.  In the lexer, the "lexer" attribute of a token refers to
the lexer object that triggered the rule. For example:

<blockquote>
<pre>
def t_NUMBER(t):
   r'\d+'
   ...
   print t.lexer           # Show lexer object
</pre>
</blockquote>

In the parser, the "lexer" and "parser" attributes refer to the lexer
and parser objects respectively.

<blockquote>
<pre>
def p_expr_plus(p):
   'expr : expr PLUS expr'
   ...
   print p.parser          # Show parser object
   print p.lexer           # Show lexer object
</pre>
</blockquote>

If necessary, arbitrary attributes can be attached to the lexer or parser object.
For example, if you wanted to have different parsing modes, you could attach a mode
attribute to the parser object and look at it later.

<H2><a name="ply_nn38"></a>8. Using Python's Optimized Mode</H2>


Because PLY uses information from doc-strings, parsing and lexing
information must be gathered while running the Python interpreter in
normal mode (i.e., not with the -O or -OO options).  However, if you
specify optimized mode like this:

<blockquote>
<pre>
lex.lex(optimize=1)
yacc.yacc(optimize=1)
</pre>
</blockquote>

then PLY can later be used when Python runs in optimized mode. To make this work,
make sure you first run Python in normal mode.  Once the lexing and parsing tables
have been generated the first time, run Python in optimized mode. PLY will use
the tables without the need for doc strings.

<p>
Beware: running PLY in optimized mode disables a lot of error
checking.  You should only do this when your project has stabilized
and you don't need to do any debugging.   One of the purposes of
optimized mode is to substantially decrease the startup time of
your compiler (by assuming that everything is already properly
specified and works).

<H2><a name="ply_nn44"></a>9. Advanced Debugging</H2>


<p>
Debugging a compiler is typically not an easy task. PLY provides some
advanced diagnostic capabilities through the use of Python's
<tt>logging</tt> module.   The next two sections describe this:

<H3><a name="ply_nn45"></a>9.1 Debugging the lex() and yacc() commands</H3>


<p>
Both the <tt>lex()</tt> and <tt>yacc()</tt> commands have a debugging
mode that can be enabled using the <tt>debug</tt> flag.  For example:

<blockquote>
<pre>
lex.lex(debug=True)
yacc.yacc(debug=True)
</pre>
</blockquote>

31716498Snate@binkert.orgNormally, the output produced by debugging is routed to either
31726498Snate@binkert.orgstandard error or, in the case of <tt>yacc()</tt>, to a file
31736498Snate@binkert.org<tt>parser.out</tt>.  This output can be more carefully controlled
31746498Snate@binkert.orgby supplying a logging object.  Here is an example that adds
31756498Snate@binkert.orginformation about where different debugging messages are coming from:
31766498Snate@binkert.org
31776498Snate@binkert.org<blockquote>
31786498Snate@binkert.org<pre>
31796498Snate@binkert.org# Set up a logging object
31806498Snate@binkert.orgimport logging
31816498Snate@binkert.orglogging.basicConfig(
31826498Snate@binkert.org    level = logging.DEBUG,
31836498Snate@binkert.org    filename = "parselog.txt",
31846498Snate@binkert.org    filemode = "w",
31856498Snate@binkert.org    format = "%(filename)10s:%(lineno)4d:%(message)s"
31866498Snate@binkert.org)
31876498Snate@binkert.orglog = logging.getLogger()
31886498Snate@binkert.org
31896498Snate@binkert.orglex.lex(debug=True,debuglog=log)
31906498Snate@binkert.orgyacc.yacc(debug=True,debuglog=log)
31916498Snate@binkert.org</pre>
31926498Snate@binkert.org</blockquote>
31936498Snate@binkert.org
31946498Snate@binkert.orgIf you supply a custom logger, the amount of debugging
31956498Snate@binkert.orginformation produced can be controlled by setting the logging level.
31966498Snate@binkert.orgTypically, debugging messages are either issued at the <tt>DEBUG</tt>,
31976498Snate@binkert.org<tt>INFO</tt>, or <tt>WARNING</tt> levels.
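As a sketch of how the logger's level filters what reaches the log, using only the standard <tt>logging</tt> module (the logger name, handler, and sample messages below are illustrative stand-ins, not PLY's actual output):

```python
import logging

# Collect log records in memory so the effect of the level filter is visible
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record)

log = logging.getLogger("plydebug")
log.propagate = False
log.addHandler(ListHandler())
log.setLevel(logging.INFO)   # suppress DEBUG-level messages

# Messages at the three levels PLY typically uses (contents are made up)
log.debug("state 3: shift/reduce detail")             # filtered out
log.info("Parsing method: LALR")                      # kept
log.warning("Token 'NUMBER' defined, but not used")   # kept

print([r.levelname for r in records])  # -> ['INFO', 'WARNING']
```

Passing such a logger as <tt>debuglog</tt> would filter PLY's messages the same way.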

<p>
PLY's error messages and warnings are also produced using the logging
interface.  This can be controlled by passing a logging object
using the <tt>errorlog</tt> parameter.

<blockquote>
<pre>
lex.lex(errorlog=log)
yacc.yacc(errorlog=log)
</pre>
</blockquote>

If you want to completely silence warnings, you can either pass in a
logging object with an appropriate filter level or use the <tt>NullLogger</tt>
object defined in either <tt>lex</tt> or <tt>yacc</tt>.  For example:

<blockquote>
<pre>
yacc.yacc(errorlog=yacc.NullLogger())
</pre>
</blockquote>

<H3><a name="ply_nn46"></a>9.2 Run-time Debugging</H3>


<p>
To enable run-time debugging of a parser, use the <tt>debug</tt> option to
<tt>parse()</tt>.  This option can either be an integer (which simply turns
debugging on or off) or an instance of a logger object. For example:

<blockquote>
<pre>
log = logging.getLogger()
parser.parse(input,debug=log)
</pre>
</blockquote>

If a logging object is passed, you can use its filtering level to control how much
output gets generated.   The <tt>INFO</tt> level is used to produce information
about rule reductions.  The <tt>DEBUG</tt> level will show information about the
parsing stack, token shifts, and other details.  The <tt>ERROR</tt> level shows information
related to parsing errors.

<p>
For very complicated problems, you should pass in a logging object that
redirects to a file where you can more easily inspect the output after
execution.
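One way to set up such a file-backed logger, using only the standard <tt>logging</tt> module (the logger name, file name, and the commented-out <tt>parser.parse()</tt> call are illustrative assumptions):

```python
import logging

# Build a logger that writes run-time parser traces to a file
# for inspection after the parse has finished.
log = logging.getLogger("parsetrace")
log.setLevel(logging.DEBUG)
log.propagate = False
handler = logging.FileHandler("parsetrace.txt", mode="w")
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
log.addHandler(handler)

# With a parser built by yacc.yacc(), you would then run:
# parser.parse(data, debug=log)

# Stand-in for a trace line the parser would emit:
log.debug("sample trace line")
handler.flush()

print(open("parsetrace.txt").read().strip())  # -> DEBUG:sample trace line
```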

<H2><a name="ply_nn39"></a>10. Where to go from here?</H2>


The <tt>examples</tt> directory of the PLY distribution contains several simple examples.   Please consult a
compilers textbook for the theory and underlying implementation details of LR parsing.

</body>
</html>