hedit.py
# -----------------------------------------------------------------------------
# hedit.py
#
# Parsing of Fortran H Edit descriptions (Contributed by Pearu Peterson)
#
# These tokens can't be easily tokenized because they are of the following
# form:
#
#   nHc1...cn
#
# where n is a positive integer and c1 ... cn are characters.
#
# This example shows how to modify the state of the lexer to parse
# such tokens
# -----------------------------------------------------------------------------

tokens = (
    'H_EDIT_DESCRIPTOR',
)

# Tokens
t_ignore = " \t\n"

def t_H_EDIT_DESCRIPTOR(t):
    r"\d+H.*"                       # This grabs all of the remaining text
    i = t.value.index('H')
    n = int(t.value[:i])

    # Adjust the tokenizing position so scanning resumes right after the
    # n characters that belong to this descriptor
    t.lexer.lexpos -= len(t.value) - (i + 1 + n)
    t.value = t.value[i + 1:i + 1 + n]
    return t

def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
import ply.lex as lex
lex.lex()

if __name__ == "__main__":
    # PLY's built-in driver: tokenize a file named on the command line
    # (or standard input) and print the resulting tokens
    lex.runmain()
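As a quick illustration of how the lexpos rewind plays out, here is a minimal usage sketch (not part of the original example). It assumes PLY is installed as the `ply` package and that the rules above are saved as a module named hedit.py with the runmain() call guarded as shown, so importing it only defines the token rules:

# usage sketch (hypothetical driver for the rules above)
import ply.lex as lex
import hedit                      # assumed module name for the code above

lexer = lex.lex(module=hedit)     # build a lexer from hedit's token rules
lexer.input("3Habc 5Hhello")

for tok in lexer:
    print(tok.type, repr(tok.value))

# Expected output, given that t_H_EDIT_DESCRIPTOR trims t.value to n characters
# and rewinds lexer.lexpos so scanning resumes right after them:
#   H_EDIT_DESCRIPTOR 'abc'
#   H_EDIT_DESCRIPTOR 'hello'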