
      TP Lex and Yacc - The Compiler Writer's Tools for Turbo Pascal
      == === === ==== = === ======== ======== ===== === ===== ======

                     Version 3.0 User Manual
                     ======= === ==== ======

                         Albert Graef
                       Schillerstr. 18
                       6509 Schornsheim

               ag@informatik.mathematik.uni-mainz.de

                        June 17, 1991


Introduction
============

This document describes the TP Lex and Yacc compiler generator toolset.
These tools are designed especially to help you prepare compilers and
similar programs like text processing utilities and command language
interpreters with the Turbo Pascal (TM) programming language.

TP Lex and Yacc are Turbo Pascal adaptations of the well-known UNIX (TM)
utilities Lex and Yacc, which were written by M.E. Lesk and S.C. Johnson
at Bell Laboratories and are used with the C programming language. TP Lex
and Yacc are intended to be approximately "compatible" with these programs.
However, they are an independent development of the author, based on the
techniques described in the famous "dragon book" of Aho, Sethi and Ullman
(Aho, Sethi, Ullman: "Compilers: Principles, Techniques, and Tools,"
Reading, Mass.: Addison-Wesley, 1986).

TP Lex and Yacc, like any other tools of this kind, are not intended for
novices or casual programmers; they require extensive programming experience
as well as a thorough understanding of the principles of parser design and
implementation to be put to work successfully. But if you are a seasoned
Turbo Pascal programmer with some background in compiler design and formal
language theory, you will almost certainly find TP Lex and Yacc to be a
powerful extension of your Turbo Pascal toolset.

This manual tells you how to get started with the TP Lex and Yacc programs
and provides a short description of these programs. Some knowledge about
the C versions of Lex and Yacc will be useful, although not strictly
necessary. For further reading, you may also refer to:

- Aho, Sethi and Ullman: "Compilers: Principles, Techniques, and Tools."
  Reading, Mass.: Addison-Wesley, 1986.

- Johnson, S.C.: "Yacc - yet another compiler-compiler." CSTR-32, Bell
  Telephone Laboratories, 1974.

- Lesk, M.E.: "Lex - a lexical analyzer generator." CSTR-39, Bell Telephone
  Laboratories, 1975.

- Schreiner, Friedman: "Introduction to compiler construction with UNIX."
  Prentice-Hall, 1985.

- The Unix Programmer's Manual, Sections `Lex' and `Yacc'.


Getting Started
---------------

The TP Lex and Yacc programs run on IBM PC compatible computers under
MS-DOS 3.0 (or later) with the Turbo Pascal compiler (Version 4.0 or
later). Your system should have at least 512 KB RAM; a hard disk is
recommended, though not strictly necessary.

To install TP Lex and Yacc on your system, simply copy the files on the
distribution disk to an appropriate directory on your hard disk. Then
put this directory on your DOS PATH and on Turbo Pascal's unit search path
(so that the Turbo Pascal compiler finds the TP Lex and Yacc library
units).

The library units (.TPU files) in the distribution are compiled with
Turbo Pascal 6.0. If you are using a different Turbo Pascal version,
you will have to recompile these units (sources are provided in the
corresponding LEXLIB.PAS and YACCLIB.PAS files).

Having installed TP Lex and Yacc on your system, you can now compile
your first TP Lex and Yacc program EXPR. EXPR is a simple desktop
calculator program which consists of a lexical analyzer in the TP Lex
source file EXPRLEX.L and the parser and main program in the TP Yacc
source file EXPR.Y. To compile these programs, issue the commands

   lex exprlex
   yacc expr

That's it! You now have the Turbo Pascal sources (EXPRLEX.PAS and EXPR.PAS)
for the EXPR program. Use the Turbo Pascal compiler to compile these
programs as follows:

   tpc expr

You can now execute the EXPR program and type some expressions to see it
work (terminate the program with an empty line). There are a number of other
sample TP Lex and Yacc programs (.L and .Y files) on the distribution
disk, including a TP Yacc cross-reference utility and a complete parser
for Standard Pascal.

The TP Lex and Yacc programs recognize some options which may be specified
anywhere on the command line. E.g.,

   lex /o exprlex

runs TP Lex with "DFA optimization" and

   yacc /v expr

runs TP Yacc in "verbose" mode (TP Yacc generates a readable description
of the generated parser).

The TP Lex and Yacc programs use the following default filename extensions:
- .L:   TP Lex input files
- .Y:   TP Yacc input files
- .PAS: TP Lex and Yacc output files

As usual, you may override the default filename extensions by explicitly
specifying suffixes.

If you ever forget how to run TP Lex and Yacc, you can issue the command

   lex

or

   yacc

without arguments to get a short summary of the command line syntax.



TP Lex
======

This section describes the TP Lex lexical analyzer generator.


Usage
-----

LEX [options] lex-file[.L] [output-file[.PAS]]


Options
-------

/v  "Verbose:" Lex generates a readable description of the generated
    lexical analyzer, written to lex-file with new extension .LST.

/o  "Optimize:" Lex optimizes DFA tables to produce a minimal DFA.


Description
-----------

TP Lex is a program generator that is used to generate the Turbo Pascal
source code for a lexical analyzer subroutine from the specification
of an input language by a regular expression grammar.

TP Lex parses the source grammar contained in lex-file (with default
suffix .L) and writes the constructed lexical analyzer subroutine
to the specified output-file (with default suffix .PAS); if no output
file is specified, output goes to lex-file with new suffix .PAS.
If any errors are found during compilation, error messages are written
to the list file (lex-file with new suffix .LST).

The generated output file contains a lexical analyzer routine, yylex,
implemented as:

  function yylex : Integer;

This routine has to be called by your main program to execute the lexical
analyzer. The return value of the yylex routine usually denotes the number
of a token recognized by the lexical analyzer (see the return routine
in the LexLib unit). At end-of-file the yylex routine normally returns 0.

The code template for the yylex routine may be found in the YYLEX.COD
file. This file is needed by TP Lex when it constructs the output file.
It must be present either in the current directory or in the directory
from which TP Lex was executed (TP Lex searches these directories in
the indicated order).

The TP Lex library (LexLib) unit is required by programs using Lex-generated
lexical analyzers; you will therefore have to put an appropriate uses clause
into your program or unit that contains the lexical analyzer routine. The
LexLib unit also provides various useful utility routines; see the file
LEXLIB.PAS for further information.


Lex Source
----------

A TP Lex program consists of three sections separated with the %% delimiter:

definitions
%%
rules
%%
auxiliary procedures

All sections may be empty. The TP Lex language is line-oriented; definitions
and rules are separated by line breaks. There is no special notation for
comments, but (Turbo Pascal style) comments may be included as Turbo Pascal
fragments (see below).
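As a rough illustration of this layout, the following sketch (in Python, not
part of TP Lex) splits a Lex source string into its three sections at the %%
delimiter lines; the small sample grammar inside it is invented for the
example.

```python
def split_lex_source(text):
    """Return (definitions, rules, auxiliary) from a Lex source string.

    The delimiters are lines consisting of %% alone; trailing sections
    may be omitted, in which case they come back empty.
    """
    sections = [[]]
    for line in text.splitlines():
        if line.strip() == "%%" and len(sections) < 3:
            sections.append([])      # start the next section
        else:
            sections[-1].append(line)
    while len(sections) < 3:         # tolerate missing trailing sections
        sections.append([])
    return tuple("\n".join(s) for s in sections)

example = """D [0-9]
%%
{D}+   return(NUM);
%%
begin
end."""

defs, rules, aux = split_lex_source(example)
```

Here defs receives the regular definition, rules the single grammar rule, and
aux the Turbo Pascal main program fragment.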

The definitions section may contain the following elements:

- regular definitions in the format:

     name   substitution

  which serve to abbreviate common subexpressions. The {name} notation
  causes the corresponding substitution from the definitions section to
  be inserted into a regular expression. The name must be a legal
  identifier (letter followed by a sequence of letters and digits;
  the underscore counts as a letter; upper- and lowercase are distinct).
  Regular definitions must be non-recursive.

- start state definitions in the format:

     %start name ...

  which are used in specifying start conditions on rules (described
  below). The %start keyword may also be abbreviated as %s or %S.

- Turbo Pascal declarations enclosed between %{ and %}. These will be
  inserted into the output file (at global scope). Also, any line that
  does not look like a Lex definition (e.g., starts with blank or tab)
  will be treated as Turbo Pascal code. (In particular, this also allows
  you to include Turbo Pascal comments in your Lex program.)
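The {name} substitution mechanism amounts to a simple textual macro
expansion, which the following sketch illustrates. The definitions (digit,
letter) and the expand helper are invented for this illustration; TP Lex
performs the substitution internally when it reads the rules section.

```python
import re

# Hypothetical definitions section, as name/substitution pairs.
definitions = {
    "digit":  "[0-9]",
    "letter": "[A-Za-z_]",
}

def expand(pattern, defs):
    """Replace each {name} with its substitution, expanding recursively.
    Regular definitions must be non-recursive for this to terminate."""
    def repl(m):
        return expand(defs[m.group(1)], defs)
    return re.sub(r"\{([A-Za-z_][A-Za-z0-9_]*)\}", repl, pattern)

identifier = expand("{letter}({letter}|{digit})*", definitions)
```

After expansion, identifier holds the plain regular expression
"[A-Za-z_]([A-Za-z_]|[0-9])*".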

The rules section of a TP Lex program contains the actual specification of
the lexical analyzer routine. It may be thought of as a big CASE statement
discriminating over the different patterns to be matched and listing the
corresponding statements (actions) to be executed. Each rule consists of a
regular expression describing the strings to be matched in the input, and a
corresponding action, a Turbo Pascal statement to be executed when the
expression matches. Expression and statement are delimited with whitespace
(blanks and/or tabs). Thus the format of a Lex grammar rule is:

   expression      statement;

Note that the action must be a single Turbo Pascal statement terminated
with a semicolon (use begin ... end for compound statements). The statement
may span multiple lines if the successor lines are indented with at least
one blank or tab. The action may also be replaced by the | character,
indicating that the action for this rule is the same as that for the next
one.

The TP Lex library unit provides various variables and routines which are
useful in the programming of actions. In particular, the yytext string
variable holds the text of the matched string, and the yyleng Byte variable
its length.

Regular expressions are used to describe the strings to be matched in a
grammar rule. They are built from the usual constructs describing character
classes and sequences, and operators specifying repetitions and alternatives.
The precise format of regular expressions is described in the next section.

The rules section may also start with some Turbo Pascal declarations
(enclosed in %{ %}) which are treated as local declarations of the
actions routine.

Finally, the auxiliary procedures section may contain arbitrary Turbo
Pascal code (such as supporting routines or a main program) which is
simply tacked on to the end of the output file. The auxiliary procedures
section is optional.


Regular Expressions
-------------------

The following table summarizes the format of the regular expressions
recognized by TP Lex (also compare Aho, Sethi, Ullman 1986, fig. 3.48).
c stands for a single character, s for a string, r for a regular expression,
and n,m for nonnegative integers.

expression   matches                        example
----------   ----------------------------   -------
c            any non-operator character c   a
\c           character c literally          \*
"s"          string s literally             "**"
.            any character but newline      a.*b
^            beginning of line              ^abc
$            end of line                    abc$
[s]          any character in s             [abc]
[^s]         any character not in s         [^abc]
r*           zero or more r's               a*
r+           one or more r's                a+
r?           zero or one r                  a?
r{m,n}       m to n occurrences of r        a{1,5}
r{m}         m occurrences of r             a{5}
r1r2         r1 then r2                     ab
r1|r2        r1 or r2                       a|b
(r)          r                              (a|b)
r1/r2        r1 when followed by r2         a/b
<x>r         r when in start condition x    <x>abc
---------------------------------------------------

The operators *, +, ? and {} have highest precedence, followed by
concatenation. The | operator has lowest precedence. Parentheses ()
may be used to group expressions and override the default precedences.
The <> and / operators may only occur once in an expression.

The usual C-like escapes are recognized:

\n     denotes newline
\r     denotes carriage return
\t     denotes tab
\b     denotes backspace
\f     denotes form feed
\NNN   denotes character no. NNN in octal base

You can also use the \ character to quote characters which would otherwise
be interpreted as operator symbols. In character classes, you may use
the - character to denote ranges of characters. For instance, [a-z]
denotes the class of all lowercase letters.
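Since Python's re module uses very similar notation, a few of the constructs
from the table can be tried directly. This is an analogy only; TP Lex
compiles its expressions to a DFA and differs in details (notably the /
and <> operators, which Python's re does not have in this form).

```python
import re

# Analogy only: Python re syntax vs. the TP Lex expression table.
assert re.fullmatch(r"a.*b", "axxxb")     # . is any character but newline
assert re.fullmatch(r"[a-z]+", "hello")   # - denotes a range inside [s]
assert not re.fullmatch(r"[^abc]", "a")   # [^s]: any character not in s
assert re.fullmatch(r"a{1,5}", "aaa")     # r{m,n}: m to n occurrences of r
assert re.fullmatch(r"\*+", "***")        # \ quotes an operator character
assert re.fullmatch(r"a|b", "b")          # r1|r2: alternation
```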

The expressions in a TP Lex program may be ambiguous, i.e. there may be inputs
which match more than one rule. In such a case, the lexical analyzer prefers
the longest match and, if it still has the choice between different rules,
it picks the first of these. If no rule matches, the lexical analyzer
executes a default action which consists of copying the input character
to the output unchanged. Thus, if the purpose of a lexical analyzer is
to translate some parts of the input, and leave the rest unchanged, you
only have to specify the patterns which have to be treated specially. If,
however, the lexical analyzer has to absorb its whole input, you will have
to provide rules that match everything. E.g., you might use the rules

   .   |
   \n  ;

which match "any other character" (and ignore it).
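The disambiguation behavior just described (prefer the longest match, break
ties by rule order, copy unmatched characters through unchanged) can be
sketched as follows. The rule table and token names are invented for the
example; TP Lex realizes the same behavior with a generated DFA rather than
by trying each rule in turn.

```python
import re

rules = [
    ("IF",    re.compile(r"if")),       # earlier rule wins ties ...
    ("IDENT", re.compile(r"[a-z]+")),   # ... but a longer match wins outright
    ("NUM",   re.compile(r"[0-9]+")),
]

def scan(text):
    pos, tokens = 0, []
    while pos < len(text):
        best = None                     # (match length, token name)
        for name, rx in rules:
            m = rx.match(text, pos)
            if m and (best is None or m.end() - pos > best[0]):
                best = (m.end() - pos, name)
        if best:
            tokens.append((best[1], text[pos:pos + best[0]]))
            pos += best[0]
        else:
            tokens.append(("COPY", text[pos]))  # default action
            pos += 1
    return tokens
```

For instance, scan("ifx") yields [("IDENT", "ifx")] because the longer match
beats the earlier IF rule, while scan("if") yields [("IF", "if")] by rule
order.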

Sometimes certain patterns have to be analyzed differently depending on some
amount of context in which the pattern appears. In such a case the / operator
is useful. For instance, the expression a/b matches a, but only if followed
by b. Note that the b does not belong to the match; rather, the lexical
analyzer, when matching an a, will look ahead in the input to see whether
it is followed by a b, before it declares that it has matched an a. Such
lookahead may be arbitrarily complex (up to the size of the LexLib input
buffer). E.g., the pattern a/.*b matches an a which is followed by a b
somewhere on the same input line. TP Lex also has a means to specify left
context which is described in the next section.
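The / operator behaves much like a lookahead assertion, and Python's (?=...)
construct is a rough analog (Python re syntax, not TP Lex): a/b corresponds
to a(?=b), and a/.*b to a(?=[^\n]*b).

```python
import re

# The lookahead part is checked but not consumed, just as the text
# after / does not belong to the match in TP Lex.
assert re.match(r"a(?=b)", "ab").group() == "a"
assert re.match(r"a(?=b)", "ac") is None
assert re.match(r"a(?=[^\n]*b)", "axxb").group() == "a"   # b later on the line
assert re.match(r"a(?=[^\n]*b)", "axx\nb") is None        # b on the next line
```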


Start Conditions
----------------

TP Lex provides some features which make it possible to handle left context.
The ^ character at the beginning of a regular expression may be used to
denote the beginning of the line. More distant left context can be described
conveniently by using start conditions on rules.

Any rule which is prefixed with the <> construct is only valid if the lexical
analyzer is in the denoted start state. For instance, the expression <x>a
can only be matched if the lexical analyzer is in start state x. You can have
multiple start states in a rule; e.g., <x,y>a can be matched in start states
x or y.

Start states have to be declared in the definitions section by means of
one or more start state definitions (see above). The lexical analyzer enters
a start state through a call to the LexLib routine start. E.g., you may
write:

%start x y
%%
<x>a    start(y);
<y>b    start(x);
%%
begin
  start(x); if yylex=0 then ;
end.

Upon initialization, the lexical analyzer is put into state x. It then
switches to state y as soon as an a is matched, and back to state x when
a b is matched.

?? 快捷鍵說(shuō)明

復(fù)制代碼 Ctrl + C
搜索代碼 Ctrl + F
全屏模式 F11
切換主題 Ctrl + Shift + D
顯示快捷鍵 ?
增大字號(hào) Ctrl + =
減小字號(hào) Ctrl + -
亚洲欧美第一页_禁久久精品乱码_粉嫩av一区二区三区免费野_久草精品视频
国产精品萝li| 日日夜夜一区二区| 在线不卡a资源高清| 国产激情一区二区三区四区 | 亚洲欧洲www| 欧美电影影音先锋| 91丨九色丨蝌蚪丨老版| 黑人巨大精品欧美黑白配亚洲| 亚洲色图自拍偷拍美腿丝袜制服诱惑麻豆| 欧美一区二区三区四区五区 | 日韩欧美一二区| 91蜜桃传媒精品久久久一区二区| 久久99精品久久久久婷婷| 曰韩精品一区二区| 国产精品久久久久影院色老大| 欧美电影影音先锋| 91国偷自产一区二区使用方法| 成人在线综合网站| 国内欧美视频一区二区| 琪琪一区二区三区| 天天亚洲美女在线视频| 亚洲一区二三区| 亚洲精品免费一二三区| 国产精品久久久久天堂| 久久精品视频免费观看| 精品国产伦一区二区三区免费| 欧美精品777| 欧美三级日本三级少妇99| 91麻豆精品视频| 成人av网在线| 波多野结衣中文字幕一区| 风间由美一区二区三区在线观看| 国产呦萝稀缺另类资源| 精品一区二区三区久久久| 亚洲国产wwwccc36天堂| 亚洲大型综合色站| 亚洲亚洲精品在线观看| 亚洲精品中文字幕乱码三区| 1000部国产精品成人观看| 国产精品伦理一区二区| 国产精品久久久久久久久晋中 | 成人免费毛片a| 丰满少妇在线播放bd日韩电影| 经典三级一区二区| 国产一区二区视频在线| 91麻豆文化传媒在线观看| 国产suv精品一区二区三区| 国产成人在线观看| gogogo免费视频观看亚洲一| 国产99一区视频免费| 成人美女在线观看| 一本在线高清不卡dvd| 色域天天综合网| 欧美日韩激情一区二区| 91精品欧美综合在线观看最新| 日韩小视频在线观看专区| 精品久久久久99| 欧美激情一区在线观看| 亚洲视频一区二区在线| 亚洲国产婷婷综合在线精品| 日本女人一区二区三区| 国产一本一道久久香蕉| www.欧美.com| 欧美丝袜丝交足nylons图片| 欧美一区二区三区成人| 欧美精品一区二区三区蜜桃| 欧美激情艳妇裸体舞| 亚洲美女区一区| 三级一区在线视频先锋| 麻豆freexxxx性91精品| 国产成人午夜99999| eeuss鲁一区二区三区| 欧美亚洲国产一卡| 日韩欧美电影一区| 国产精品婷婷午夜在线观看| 一区二区三区在线高清| 奇米在线7777在线精品| 成人免费视频播放| 欧美日韩精品电影| 国产亚洲欧美一区在线观看| 亚洲精选免费视频| 精品一区二区三区久久久| 91蜜桃视频在线| 精品伦理精品一区| 亚洲男人都懂的| 久久99国产精品久久99果冻传媒| www.亚洲人| 欧美一区二区免费观在线| 亚洲国产精品国自产拍av| 亚洲成av人在线观看| 国产精品99久久久久| 欧美日韩五月天| 中文子幕无线码一区tr| 日本中文一区二区三区| 99久精品国产| 久久综合九色综合欧美就去吻| 亚洲一区在线视频| 国产传媒日韩欧美成人| 7777精品伊人久久久大香线蕉完整版 | 亚洲精品美国一| 国产乱一区二区| 欧美日韩一区二区不卡| 国产精品无圣光一区二区| 日韩电影免费在线观看网站| 91影院在线观看| 久久久国际精品| 午夜久久久久久| 一本久道中文字幕精品亚洲嫩| 欧美成人bangbros| 偷拍与自拍一区| 91视频观看免费| 国产欧美日韩视频一区二区 | 日韩高清不卡一区二区| 97精品视频在线观看自产线路二| 亚洲美女电影在线| 国产大片一区二区| 精品入口麻豆88视频| 天堂va蜜桃一区二区三区| 91久久精品网| 成人免费在线视频观看| 国产成人综合亚洲91猫咪| 日韩午夜精品电影| 日韩激情一二三区| 欧美日韩免费不卡视频一区二区三区| 亚洲欧洲日韩在线| 成人av中文字幕| 国产欧美一区二区三区沐欲| 国内久久精品视频| 精品精品国产高清a毛片牛牛| 亚洲成av人片在线观看无码| 91国偷自产一区二区三区成为亚洲经典| 国产欧美日韩一区二区三区在线观看 | 26uuu亚洲婷婷狠狠天堂| 日本不卡一区二区| 91精品福利在线一区二区三区| 亚洲大片免费看| 91麻豆精品国产91久久久| 丝袜亚洲另类欧美| 91精品一区二区三区久久久久久| 午夜国产不卡在线观看视频| 欧美日韩情趣电影| 天天综合网 天天综合色| 911精品国产一区二区在线| 日本va欧美va瓶| 日韩精品一区二区三区老鸭窝| 久久99久久99小草精品免视看| 日韩美女天天操| 国产精品影音先锋| 国产精品久久久久久久久免费桃花 | 欧美高清你懂得| 美腿丝袜在线亚洲一区| 欧美电影免费观看高清完整版在 | 一区二区高清在线| 欧美日韩在线播放三区四区| 天天射综合影视| 日韩欧美卡一卡二| 久久99精品久久久久久动态图| 精品国内片67194| 国v精品久久久网| 亚洲三级电影全部在线观看高清| 
色综合久久中文综合久久97| 亚洲成av人**亚洲成av**| 日韩一区二区三区视频在线 | 日韩欧美精品在线| 国产一区二区三区免费播放| 日本一区二区动态图| 日本韩国一区二区| 蜜桃一区二区三区在线| 欧美国产一区二区在线观看| 色先锋资源久久综合| 日韩av二区在线播放| 日本一二三不卡| 欧美日韩免费观看一区三区| 免费成人在线播放| 国产精品欧美久久久久一区二区| 日本一区二区三区四区在线视频| 99免费精品在线| 日韩精品国产欧美| 国产精品日韩成人| 欧美精品在线视频| 成人午夜电影久久影院| 亚洲成人午夜影院| 欧美国产1区2区| 日韩一区二区三区免费观看| 成人av电影在线观看| 日韩精品电影在线| 中文字幕日韩精品一区| 日韩欧美一区在线观看| 91麻豆高清视频| 国产综合色产在线精品| 亚洲午夜电影在线观看| 欧美激情一区二区在线| 欧美一区二区福利在线| 色综合久久99| 国产成人亚洲综合色影视| 亚洲高清在线精品| 国产精品的网站| 精品国产一区久久| 欧美人成免费网站| 97超碰欧美中文字幕|