      \verb"["$s$\verb"]"& any character in $s$& \verb"[abc]"\\
      \verb"[^"$s$\verb"]"& any character not in $s$& \verb"[^abc]"\\
      $r$\verb"*"& zero or more $r$'s& \verb"a*"\\
      $r$\verb"+"& one or more $r$'s& \verb"a+"\\
      $r$\verb"?"& zero or one $r$& \verb"a?"\\
      $r$\verb"{"$m$\verb","$n$\verb"}"& $m$ to $n$ occurrences of $r$& \verb"a{1,5}"\\
      $r$\verb"{"$m$\verb"}"& $m$ occurrences of $r$& \verb"a{5}"\\
      $r_1r_2$& $r_1$ then $r_2$& \verb"ab"\\
      $r_1$\verb"|"$r_2$& $r_1$ or $r_2$& \verb"a|b"\\
      \verb"("$r$\verb")"& $r$& \verb"(a|b)"\\
      $r_1$\verb"/"$r_2$& $r_1$ when followed by $r_2$& \verb"a/b"\\
      \verb"<"$x$\verb">"$r$& $r$ when in start condition $x$& \verb"<x>abc"\\
      \hline
   \end{tabular}
   \caption{Regular expressions.}
   \label{tab1}
\end{table*}

The operators \verb"*", \verb"+", \verb"?" and \verb"{}" have highest
precedence, followed by concatenation. The \verb"|" operator has lowest
precedence. Parentheses \verb"()" may be used to group expressions and
override default precedences. The \verb"<>" and \verb"/" operators may only
occur once in an expression.
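
As an illustration of these precedences, the following pairs of expressions
are equivalent:
\begin{quote}\begin{verbatim}
   ab*      is the same as   a(b*)
   ab|cd    is the same as   (ab)|(cd)
   a|b*     is the same as   a|(b*)
\end{verbatim}\end{quote}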

The usual C-like escapes are recognized:
\begin{itemize}
   \item \verb"\n"     denotes newline
   \item \verb"\r"     denotes carriage return
   \item \verb"\t"     denotes tab
   \item \verb"\b"     denotes backspace
   \item \verb"\f"     denotes form feed
   \item \verb"\"$nnn$ denotes character no.\ $nnn$ in octal base
\end{itemize}
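
For example, a rule that skips blanks, tabs and newlines might use these
escapes (the empty action simply ignores the match):
\begin{quote}\begin{verbatim}
   [ \t\n]+   ;
\end{verbatim}\end{quote}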

You can also use the \verb"\" character to quote characters which would
otherwise be interpreted as operator symbols. In character classes, you may
use the \verb"-" character to denote ranges of characters. For instance,
\verb"[a-z]" denotes the class of all lowercase letters.
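
Combining quoting, escapes and character ranges, typical patterns might look
as follows:
\begin{quote}\begin{verbatim}
   \*                       a literal asterisk
   [0-9]+                   an unsigned integer
   [a-zA-Z_][a-zA-Z0-9_]*   an identifier
\end{verbatim}\end{quote}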

The expressions in a TP Lex program may be ambiguous, i.e., there may be inputs
which match more than one rule. In such a case, the lexical analyzer prefers
the longest match and, if there is still a choice between different rules,
it picks the first of them. If no rule matches, the lexical analyzer
executes a default action which consists of copying the input character
to the output unchanged. Thus, if the purpose of a lexical analyzer is
to translate some parts of the input, and leave the rest unchanged, you
only have to specify the patterns which have to be treated specially. If,
however, the lexical analyzer has to absorb its whole input, you will have
to provide rules that match everything. E.g., you might use the rules
\begin{quote}\begin{verbatim}
   .   |
   \n  ;
\end{verbatim}\end{quote}
which match ``any other character'' (and ignore it).

Sometimes certain patterns have to be analyzed differently depending on some
amount of context in which the pattern appears. In such a case the \verb"/"
operator is useful. For instance, the expression \verb"a/b" matches \verb"a",
but only if followed by \verb"b". Note that the \verb"b" does not belong to
the match; rather, the lexical analyzer, when matching an \verb"a", will look
ahead in the input to see whether it is followed by a \verb"b", before it
declares that it has matched an \verb"a". Such lookahead may be arbitrarily
complex (up to the size of the \verb"LexLib" input buffer). E.g., the pattern
\verb"a/.*b" matches an \verb"a" which is followed by a \verb"b" somewhere on
the same input line. TP Lex also has a means to specify left context which is
described in the next section.

\subsection*{Start Conditions}

TP Lex provides some features which make it possible to handle left context.
The \verb"^" character at the beginning of a regular expression may be used
to denote the beginning of the line. More distant left context can be described
conveniently by using start conditions on rules.

Any rule which is prefixed with the \verb"<>" construct is only valid if the
lexical analyzer is in the denoted start state. For instance, the expression
\verb"<x>a" can only be matched if the lexical analyzer is in start state
\verb"x". You can have multiple start states in a rule; e.g., \verb"<x,y>a"
can be matched in start states \verb"x" or \verb"y".

Start states have to be declared in the definitions section by means of
one or more start state definitions (see above). The lexical analyzer enters
a start state through a call to the \verb"LexLib" routine \verb"start". E.g.,
you may write:

\begin{quote}\begin{verbatim}
%start x y
%%
<x>a    start(y);
<y>b    start(x);
%%
begin
  start(x); if yylex=0 then ;
end.
\end{verbatim}\end{quote}

Upon initialization, the lexical analyzer is put into state \verb"x". It then
proceeds in state \verb"x" until it matches an \verb"a" which puts it into
state \verb"y". In state \verb"y" it may match a \verb"b" which puts it into
state \verb"x" again, etc.

Start conditions are useful when certain constructs have to be analyzed
differently depending on some left context (such as a special character
at the beginning of the line), and if multiple lexical analyzers have to
work in concert. If a rule is not prefixed with a start condition, it is
valid in all user-defined start states, as well as in the lexical analyzer's
default start state.

\subsection*{Lex Library}

The TP Lex library (\verb"LexLib") unit provides various variables and
routines which are used by Lex-generated lexical analyzers and application
programs. It provides the input and output streams and other internal data
structures used by the lexical analyzer routine, and supplies some variables
and utility routines which may be used by actions and application programs.
Refer to the file \verb"LEXLIB.PAS" for a closer description.

You can also modify the Lex library unit (and/or the code template in the
\verb"YYLEX.COD" file) to customize TP Lex to your target applications. E.g.,
you might wish to optimize the code of the lexical analyzer for some
special application, make the analyzer read from/write to memory instead
of files, etc.

\subsection*{Implementation Restrictions And Bugs}

Internal table sizes and the main memory available limit the complexity
of source grammars that TP Lex can handle. There is currently no way to
change the internal table sizes (apart from modifying the sources of TP Lex
itself), but the maximum table sizes provided by TP Lex seem to be large
enough to handle most realistic applications. The current limits are
600 positions (p), 300 states (s) and 600 transitions (t).

As implemented, the generated DFA table is stored as a typed array constant
which is inserted into the \verb"YYLEX.COD" code template. The transitions in
each state are stored in order. Of course it would have been more efficient to
generate a big \verb"CASE" statement instead, but I found that this may cause
problems with the encoding of large DFA tables because Turbo Pascal has
a quite rigid limit on the code size of individual procedures. I decided to
use a scheme in which transitions on different symbols to the same state are
merged into one single transition (specifying a character set and the
corresponding next state). This keeps the number of transitions in each state
quite small and still allows a fairly efficient access to the transition
table.

The TP Lex program has an option (\verb"/o") to optimize DFA tables. This
causes a minimal DFA to be generated, using the algorithm described in Aho,
Sethi, Ullman (1986). Although the absolute limit on the number of DFA states
that TP Lex can handle is 300, TP Lex poses an additional restriction (100) on
the number of states in the initial partition of the DFA optimization
algorithm. Thus, you may get a fatal \verb"integer set overflow" message when
using the \verb"/o" option even when TP Lex is able to generate an unoptimized
DFA. In such cases you will just have to be content with the unoptimized DFA.
(Anyhow, using the merged transitions scheme described above, TP Lex usually
constructs unoptimized DFA's which are not far from being optimal, and thus
in most cases DFA optimization won't have a great impact on DFA table sizes.)

\subsection*{Differences from UNIX Lex}

Major differences between TP Lex and UNIX Lex are listed below.

\begin{itemize}
   \item
      TP Lex produces output code for Turbo Pascal, rather than for C.
   \item
      Character tables (\verb"%T") are not supported; neither are any
      directives to determine internal table sizes (\verb"%p", \verb"%n",
      etc.).
   \item
      Library routines are named differently from the UNIX version (e.g.,
      the \verb"start" routine takes the place of the \verb"BEGIN" macro of
      UNIX Lex), and, of course, all macros of UNIX Lex (\verb"ECHO",
      \verb"REJECT", etc.) had to be implemented as procedures.
\end{itemize}

\section{TP Yacc}

This section describes the TP Yacc compiler compiler.

\subsection*{Usage}

\begin{quote}\begin{verbatim}
YACC [options] yacc-file[.Y]
  [output-file[.PAS]]
\end{verbatim}\end{quote}

\subsection*{Options}

\begin{description}
   \item[\verb"/v"]
      ``Verbose:'' TP Yacc generates a readable description of the generated
      parser, written to \verb"yacc-file" with new extension \verb".LST".
   \item[\verb"/d"]
      ``Debug:'' TP Yacc generates a parser with debugging output.
\end{description}

\subsection*{Description}

TP Yacc is a program that lets you prepare parsers from the description
of input languages by BNF-like grammars. You simply specify the grammar
for your target language, augmented with the Turbo Pascal code necessary
to process the syntactic constructs, and TP Yacc translates your grammar
into the Turbo Pascal code for a corresponding parser subroutine named
\verb"yyparse".

TP Yacc parses the source grammar contained in \verb"yacc-file" (with default
suffix \verb".Y") and writes the constructed parser subroutine to the
specified \verb"output-file" (with default suffix \verb".PAS"); if no output
file is specified, output goes to \verb"yacc-file" with new suffix
\verb".PAS". If any errors are found during compilation, error messages are
written to the list file (\verb"yacc-file" with new suffix \verb".LST").

The generated parser routine, \verb"yyparse", is declared as:

\begin{quote}\begin{verbatim}
   function yyparse : Integer;
\end{verbatim}\end{quote}

This routine may be called by your main program to execute the parser.
The return value of the \verb"yyparse" routine denotes success or failure of
the parser (possible return values: 0 = success, 1 = unrecoverable syntax
error or parse stack overflow).

The code template for the \verb"yyparse" routine may be found in the
\verb"YYPARSE.COD" file. This file is needed by TP Yacc when it constructs
the output file. It must be present either in the current directory or in the
directory from which TP Yacc was executed (TP Yacc searches these directories
in the indicated order).

The TP Yacc library (\verb"YaccLib") unit is required by programs using
Yacc-generated parsers; you will therefore have to put an appropriate \verb"uses"
clause into your program or unit that contains the parser routine. The
\verb"YaccLib" unit also provides some routines which may be used to control
the actions of the parser. See the file \verb"YACCLIB.PAS" for further
information.
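
Putting this together, a minimal host program might look as follows. This is
only a sketch: the file name \verb"MYPARSE.PAS" is illustrative and stands for
the parser file generated by TP Yacc, and incorporating the generated code via
an \verb"{$I}" include directive is one common arrangement, not the only one.

\begin{quote}\begin{verbatim}
program MyParser;

uses YaccLib, LexLib;

{$I MYPARSE.PAS}  { generated parser containing yyparse }

begin
  if yyparse = 0 then
    writeln('parse succeeded')
  else
    writeln('parse failed')
end.
\end{verbatim}\end{quote}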

\subsection*{Yacc Source}

A TP Yacc program consists of three sections separated by the \verb"%%"
delimiter:

\begin{quote}\begin{verbatim}
definitions
%%
rules
%%
auxiliary procedures
\end{verbatim}\end{quote}
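
As a sketch of this layout, a trivial grammar might read as follows (the token
name \verb"NUM" and the single rule are purely illustrative):
\begin{quote}\begin{verbatim}
%token NUM

%%

expr : expr '+' NUM
     | NUM
     ;

%%

{ auxiliary Turbo Pascal procedures, if any, go here }
\end{verbatim}\end{quote}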

The TP Yacc language is free-format: whitespace (blanks, tabs and newlines)
is ignored, except if it serves as a delimiter. Comments have the C-like
format \verb"/* ... */". They are treated as whitespace. Grammar symbols are
denoted by identifiers which have the usual form (a letter, including
underscore, followed by a sequence of letters and digits; upper- and
lowercase are distinct). The TP Yacc language also has some keywords which
always start with the \verb"%" character. Literals are denoted by characters
enclosed in single quotes. The usual C-like escapes are recognized:

\begin{itemize}
   \item \verb"\n"     denotes newline
   \item \verb"\r"     denotes carriage return
   \item \verb"\t"     denotes tab
   \item \verb"\b"     denotes backspace
   \item \verb"\f"     denotes form feed
   \item \verb"\"$nnn$ denotes character no.\ $nnn$ in octal base
\end{itemize}

\subsection*{Definitions}

The first section of a TP Yacc grammar serves to define the symbols used in
the grammar. It may contain the following types of definitions:

\begin{itemize}
   \item
      start symbol definition: A definition of the form
      \begin{quote}\begin{verbatim}
   %start symbol
      \end{verbatim}\end{quote}
      declares the start nonterminal of the grammar (if this definition is
      omitted, TP Yacc assumes the left-hand side nonterminal of the first
      grammar rule as the start symbol of the grammar).
   \item
      terminal definitions: Definitions of the form
      \begin{quote}\begin{verbatim}
   %token symbol ...
      \end{verbatim}\end{quote}
      are used to declare the terminal symbols (``tokens'') of the target
      language. Any identifier not introduced in a \verb"%token" definition
      will be treated as a nonterminal symbol.
    
      As far as TP Yacc is concerned, tokens are atomic symbols which do not
      have an inner structure. A lexical analyzer must be provided which
      takes on the task of tokenizing the input stream and returning the
      individual tokens and literals to the parser (see Section {\em Lexical
      Analysis\/}).
   \item
      precedence definitions: Operator symbols (terminals) may be associated
      with a precedence by means of a precedence definition which may have
      one of the following forms
      \begin{quote}\begin{verbatim}
   %left symbol ...
   %right symbol ...
   %nonassoc symbol ...
      \end{verbatim}\end{quote}
      which are used to declare left-, right- and nonassociative operators,
      respectively. Each precedence definition introduces a new precedence
      level, lowest precedence first. E.g., you may write:
      \begin{quote}\begin{verbatim}
   %nonassoc '<' '>' '=' GEQ LEQ NEQ
      /* relational operators */
   %left     '+' '-'  OR
      /* addition operators */
   %left     '*' '/' AND
     /* multiplication operators */
   %right    NOT UMINUS
     /* unary operators */
      \end{verbatim}\end{quote}
