usages.doc
----------------------------------------------------------------------------
NAME
feature - extract feature vectors from a speech signal
SYNOPSIS
feature [options] speech_sample
DESCRIPTION
This program calculates MFCC cepstral coefficients and their
derivatives from a given signal. Three different feature sets or their
combinations can be specified. A built-in automatic start/end point
detection function can remove the silent parts at the beginning and
end of the signal.
OPTIONS
-o feature output file [stdout]
-b [512] starting sample
-w [256] analysis window size
-s [128] window moving step
-r [10] MFCC order (must precede the -M, -G, -I options)
-T [1] Class membership
-S do *not* swap byte order (SUN: big endian; PC, VAX: little endian)
-E Endpoint detection [off]
-M output MFCC [off]
-G output Delta MFCC [off]
-I output Accu MFCC [off]
EXAMPLES
feature -o foo.pat -w 256 -s 128 -E -M -G -T 1 foo.wav
Extracts 10 MFCC and 10 delta-MFCC coefficients from each
frame of 256 samples. The class membership of this speech token is
"1"; the vector patterns are *appended* to the file "foo.pat".
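The derivative (delta) coefficients mentioned above are conventionally computed by linear regression over neighboring frames; a minimal sketch (the regression half-width K=2 is an assumed value, not taken from this program):

```python
def delta(frames, K=2):
    """Regression-based delta coefficients over a list of MFCC frames.

    frames: list of equal-length coefficient vectors (one per frame).
    K: half-width of the regression window (assumed value).
    """
    n = len(frames)
    denom = 2 * sum(k * k for k in range(1, K + 1))
    out = []
    for t in range(n):
        d = [0.0] * len(frames[0])
        for k in range(1, K + 1):
            prev = frames[max(t - k, 0)]      # clamp at the edges
            nxt = frames[min(t + k, n - 1)]
            for i in range(len(d)):
                d[i] += k * (nxt[i] - prev[i]) / denom
        out.append(d)
    return out
```

For a linearly increasing coefficient, the interior delta values recover the slope, which is the intended behavior of the regression.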
----------------------------------------------------------------------------
NAME
gensym - transform a feature vector sequence into discrete
observation symbols.
SYNOPSIS
gensym [options] patternfile codebook
DESCRIPTION
This program turns real-valued vector sequences into discrete
observation symbols, which are used for training and testing discrete
hidden Markov models. If the [-g] option is specified, it internally
invokes the LBG algorithm to generate a codebook; it then finds the
nearest code vector for each input vector.
OPTIONS
-o symbol file [stdout]
-g generate codebook from the given data
-n [32] number of code vectors, 2, 4, 8, 16,...
-v verbose
EXAMPLES
gensym -v -g -n 16 eset_tr.pat eset_tr.cod > eset_tr.seq
gensym eset_te.pat eset_tr.cod > eset_te.seq
The first example generates a codebook from the training
patterns and encodes the training patterns, redirecting the symbol
sequences to the file "eset_tr.seq". The second example uses the
codebook generated from the training data to encode the test patterns.
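Once a codebook exists, encoding is a nearest-neighbor search; a minimal sketch (squared Euclidean distance is an assumption about the program's metric, and the LBG codebook-generation step is not shown):

```python
def nearest_code(vec, codebook):
    """Return the index of the closest code vector (squared Euclidean)."""
    best, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = sum((v - x) ** 2 for v, x in zip(vec, c))
        if d < best_d:
            best, best_d = i, d
    return best

def encode(vectors, codebook):
    """Turn a real vector sequence into discrete observation symbols."""
    return [nearest_code(v, codebook) for v in vectors]
```

Each input vector is thus replaced by the index of its nearest code vector, which is what the discrete HMM tools consume.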
----------------------------------------------------------------------------
NAME
dtwcmp - compare test patterns with all training patterns using the
dynamic time warping (DTW) algorithm.
SYNOPSIS
dtwcmp [options] reference_pattern test_pattern
DESCRIPTION
This is the classical speech recognition method, which aligns a test
speech token with each of the reference patterns.
OPTIONS
-d distance file name [none]
-k [1] k nearest neighbor
-v verbose
EXAMPLES
dtwcmp -v -k 1 eset_tr.pat eset_te.pat
This command finds the nearest training pattern for each test pattern
based on DTW alignment and reports the recognition results. The training
patterns "eset_tr.pat" and test patterns "eset_te.pat" are generated from
speech data with the "feature" utility.
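The DTW alignment cost is computed by dynamic programming over a cumulative distance matrix; a minimal sketch (the symmetric local path constraint below is an assumption; dtwcmp may use a different one):

```python
def dtw(ref, test, dist=lambda a, b: abs(a - b)):
    """Classic DTW alignment cost between two sequences.

    Allowed local moves: diagonal, horizontal, vertical (an assumed
    path constraint, not necessarily the one dtwcmp uses).
    """
    INF = float("inf")
    n, m = len(ref), len(test)
    # D[i][j]: minimal cumulative cost of aligning ref[:i] with test[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(ref[i - 1], test[j - 1])
            D[i][j] = c + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

The nearest-neighbor classification then picks, for each test token, the reference pattern with the smallest alignment cost.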
----------------------------------------------------------------------------
NAME
chmm - training program for continuous hidden Markov models with
Gaussian mixture density functions.
SYNOPSIS
chmm [options] configfile patternfile modelfile
DESCRIPTION
This is the state-of-the-art technique currently used for speech
recognition. Each class of speech is modeled by an HMM with
Gaussian mixture density functions and trained by the Baum-Welch
algorithm. The model structure can be specified in the "configfile";
the most commonly used structure is the left-to-right model. In
the configfile, "1" means the transition from this state (row) to
another state (column) is allowed; otherwise, specify "0".
OPTIONS
-m [4] Number of Gaussian mixture components
-s [1996] Random seed
-i [0] Train only the specified model; 0 means all models
-v verbose
EXAMPLES
chmm -v -v 8_state.cfg eset_tr.pat eset_tr.chm
This command generates an HMM for each class in the "eset_tr.pat"
file and saves the model parameters in "eset_tr.chm" for later testing.
Note that most commands accept more than one "-v" option, in which
case more messages are dumped.
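To make the row/column convention concrete, a hypothetical configfile matrix for a 4-state left-to-right model that allows only self-loops and transitions to the next state could look like the following (any additional header lines the real file format may require are not shown here):

```
1 1 0 0
0 1 1 0
0 0 1 1
0 0 0 1
```

Row i, column j holds "1" when the transition from state i to state j is allowed; the strictly lower triangle is all "0", which is what makes the model left-to-right.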
----------------------------------------------------------------------------
NAME
viterbi - test program for hidden Markov models with Gaussian
mixture density functions.
SYNOPSIS
viterbi [options] patternfile modelfile
DESCRIPTION
This is the companion program to "chmm". It uses either
Viterbi decoding or forward scores to classify each pattern in the
"patternfile".
OPTIONS
-o [-] Output probability
-p [-] Backtracking path
-t Viterbi score [default: forward variable]
-v verbose
EXAMPLES
viterbi -v eset_te.pat eset_tr.chm
viterbi -p pathfile -t eset_te.pat eset_tr.chm
The first command uses forward scores to test each pattern in the
"eset_te.pat" file, and the second uses the Viterbi score. The
alignment paths are written to "pathfile".
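In Viterbi decoding, each frame's per-state log-likelihood (here, from the Gaussian mixtures) is combined with the transition structure to find the single best state path; a minimal sketch in log probabilities (function and argument names are illustrative, not taken from the program):

```python
def viterbi_score(obs_logprob, log_trans, log_init):
    """Viterbi score and best state path for one pattern.

    obs_logprob[t][j]: log-likelihood of frame t in state j;
    log_trans[i][j], log_init[j]: log transition/initial probabilities.
    """
    T, N = len(obs_logprob), len(log_init)
    delta = [log_init[j] + obs_logprob[0][j] for j in range(N)]
    back = []  # backtracking pointers, one row per frame after the first
    for t in range(1, T):
        new, ptr = [], []
        for j in range(N):
            best_i = max(range(N), key=lambda i: delta[i] + log_trans[i][j])
            ptr.append(best_i)
            new.append(delta[best_i] + log_trans[best_i][j] + obs_logprob[t][j])
        delta, back = new, back + [ptr]
    last = max(range(N), key=lambda j: delta[j])
    path = [last]
    for ptr in reversed(back):   # trace the best path backwards
        path.append(ptr[path[-1]])
    return delta[last], path[::-1]
```

The returned path is what the "-p" option would write out; the score is compared across class models to classify the pattern.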
----------------------------------------------------------------------------
NAME
dhmm - training program for discrete hidden Markov models.
SYNOPSIS
dhmm [options] train_sequence modelfile
DESCRIPTION
This program trains left-to-right discrete-observation HMMs
from the given training sequences, which are generated with the
"gensym" program.
OPTIONS
-s [6] : number of states
-n [32] : number of symbols (codebook size)
-i [0] : train a single model; 0 means all models
-S [9373] : random seed
-e [0.01] : stop training when the probability improvement falls below this threshold
-v verbose
EXAMPLES
dhmm -v -v -n 16 eset_tr.seq eset_tr.dhm
This command generates a discrete HMM for each class from the
training sequences. Note that the number of observation symbols must
equal the codebook size used for generating the sequences.
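The training itself is typically Baum-Welch (forward-backward) re-estimation; a minimal unscaled sketch of one re-estimation pass over a single sequence (suitable only for short sequences; the real program presumably applies scaling and pools statistics over all training sequences):

```python
def baum_welch_step(seq, A, B, pi):
    """One Baum-Welch re-estimation pass over a single symbol sequence.

    A[i][j]: transition probs, B[j][k]: symbol probs, pi[j]: initial
    probs. Unscaled, so only suitable for short sequences.
    """
    N, T = len(pi), len(seq)
    # Forward pass.
    alpha = [[0.0] * N for _ in range(T)]
    for j in range(N):
        alpha[0][j] = pi[j] * B[j][seq[0]]
    for t in range(1, T):
        for j in range(N):
            alpha[t][j] = sum(alpha[t-1][i] * A[i][j] for i in range(N)) * B[j][seq[t]]
    # Backward pass.
    beta = [[1.0] * N if t == T - 1 else [0.0] * N for t in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(A[i][j] * B[j][seq[t+1]] * beta[t+1][j] for j in range(N))
    P = sum(alpha[T - 1])
    # State occupancy probabilities.
    gamma = [[alpha[t][j] * beta[t][j] / P for j in range(N)] for t in range(T)]
    # Re-estimate transition probabilities.
    newA = [[0.0] * N for _ in range(N)]
    for i in range(N):
        denom = sum(gamma[t][i] for t in range(T - 1))
        for j in range(N):
            num = sum(alpha[t][i] * A[i][j] * B[j][seq[t+1]] * beta[t+1][j] / P
                      for t in range(T - 1))
            newA[i][j] = num / denom if denom > 0 else A[i][j]
    # Re-estimate symbol probabilities.
    M = len(B[0])
    newB = [[0.0] * M for _ in range(N)]
    for j in range(N):
        denom = sum(gamma[t][j] for t in range(T))
        for k in range(M):
            num = sum(gamma[t][j] for t in range(T) if seq[t] == k)
            newB[j][k] = num / denom if denom > 0 else B[j][k]
    return newA, newB, gamma[0][:]
```

Iterating this step increases the sequence likelihood monotonically, which is why the "-e" threshold on probability improvement is a natural stopping rule.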
----------------------------------------------------------------------------
NAME
dtest - test program for discrete hidden Markov models.
SYNOPSIS
dtest [options] test_sequence modelfile
DESCRIPTION
This is the companion program to "dhmm". It uses forward
scores to classify each sequence in the "test_sequence" file.
OPTIONS
-i [none] Sequence ID (1:correct, 0:wrong)
-p [none] write out prob. to a file
-v verbose
EXAMPLES
dtest -i eset_te.id eset_te.seq eset_tr.dhm
This command gives the recognition results and, using the IDs in
"eset_te.id", tells you which sequences were recognized correctly
("1" correct, "0" wrong).
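Forward-score classification evaluates each sequence against every class model and picks the most likely one; a minimal sketch of the scaled forward computation for one discrete HMM (matrix layout is an assumption):

```python
import math

def forward_logprob(seq, A, B, pi):
    """Log-probability of a discrete observation sequence under an HMM.

    A[i][j]: transition probs, B[j][k]: prob of symbol k in state j,
    pi[j]: initial probs. Per-frame scaling avoids underflow.
    """
    N = len(pi)
    alpha = [pi[j] * B[j][seq[0]] for j in range(N)]
    s = sum(alpha)
    logp = math.log(s)
    alpha = [a / s for a in alpha]
    for o in seq[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
        s = sum(alpha)
        logp += math.log(s)       # accumulate the scaling factors
        alpha = [a / s for a in alpha]
    return logp
```

The classifier would call this once per class model and report the argmax; comparing that label against the ID file yields the "1"/"0" correctness marks.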