One of the most important issues affecting the implementation of the
microcontroller software is the data-decision algorithm. Data decision
refers to decoding the data stream on the DIO pin of the CC400/CC900.
Two main principles exist for decoding Manchester-coded data: data
decision based on timing the period between transitions, and data
decision based on oversampling.
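A minimal C++ sketch of the transition-timing approach follows. The bit
period tBit, tolerance tol, and the polarity convention (rising mid-bit
edge encodes 0) are illustrative assumptions, not taken from the
CC400/CC900 documentation. The key observation is that edges arrive
either half a bit period apart (a bit boundary between two equal bits)
or a full period apart (the next mid-bit transition), so classifying
each interval as short or long recovers the bit stream.

#include <cstddef>
#include <cstdint>
#include <vector>

struct Edge {
    uint32_t t;      // timestamp in microseconds (e.g. timer capture)
    bool rising;     // true = low-to-high transition
};

// Decode assuming the first edge is a mid-bit transition (frame sync
// established elsewhere, e.g. by a known preamble).
std::vector<int> decodeManchester(const std::vector<Edge>& edges,
                                  uint32_t tBit, uint32_t tol)
{
    std::vector<int> bits;
    if (edges.empty()) return bits;

    bits.push_back(edges[0].rising ? 0 : 1);
    uint32_t lastMid = edges[0].t;

    for (std::size_t i = 1; i < edges.size(); ++i) {
        uint32_t dt = edges[i].t - lastMid;
        if (dt + tol >= tBit && dt <= tBit + tol) {
            // One full bit period since the last mid-bit edge: this is
            // the next mid-bit transition, so decode it directly.
            bits.push_back(edges[i].rising ? 0 : 1);
            lastMid = edges[i].t;
        } else if (dt + tol >= tBit / 2 && dt <= tBit / 2 + tol) {
            // Half a period: a bit-boundary transition between two
            // equal bits; skip it and wait for the mid-bit edge.
            continue;
        } else {
            break;  // timing outside tolerance: synchronization lost
        }
    }
    return bits;
}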
LVQ (Learning Vector Quantization) algorithm source code
This directory contains code implementing the Learning vector quantization
network. Source code may be found in LVQ.CPP. Sample training data is found
in LVQ1.PAT. Sample test data is found in LVQTEST1.TST and LVQTEST2.TST. The
LVQ program accepts input consisting of vectors and calculates the LVQ
network weights. If a test set is specified, the winning neuron (class) for
each pattern is identified and the Euclidean distance between the pattern and
each neuron is reported. Output is directed to the screen.
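As an illustration of the distance computation and winner selection just
described, here is a minimal C++ sketch; the struct names, the LVQ1
update rule shown, and the learning rate are illustrative and not taken
from LVQ.CPP.

#include <cmath>
#include <cstddef>
#include <vector>

struct Neuron {
    std::vector<double> w;   // weight vector (prototype)
    int classLabel;          // class this prototype represents
};

static double euclidean(const std::vector<double>& a,
                        const std::vector<double>& b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        s += d * d;
    }
    return std::sqrt(s);
}

// Returns the index of the winning neuron for one input pattern.
std::size_t winner(const std::vector<Neuron>& net,
                   const std::vector<double>& x)
{
    std::size_t best = 0;
    double bestDist = euclidean(net[0].w, x);
    for (std::size_t i = 1; i < net.size(); ++i) {
        double d = euclidean(net[i].w, x);
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}

// LVQ1 training step: pull the winner toward x if its class matches
// the pattern's label, push it away otherwise.
void lvq1Step(std::vector<Neuron>& net, const std::vector<double>& x,
              int label, double eta /* e.g. 0.05 */)
{
    Neuron& n = net[winner(net, x)];
    double sign = (n.classLabel == label) ? 1.0 : -1.0;
    for (std::size_t i = 0; i < n.w.size(); ++i)
        n.w[i] += sign * eta * (x[i] - n.w[i]);
}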
% bayeserr  - Computes the Bayesian risk for the optimal classifier.
% bayescln  - Classifier based on the Bayes decision rule for Gaussians.
% bayesnd   - Discriminant function, dichotomy, maximum a posteriori probability.
% bhattach  - Bhattacharyya's upper bound on the mean classification error.
% pbayescln - Plots the discriminant function of the Bayes classifier.
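As an illustration of the decision rule these routines implement, a
minimal C++ sketch of Bayes classification for two univariate Gaussian
classes; the parameter names and the use of log posteriors are
illustrative, not taken from the toolbox.

#include <cmath>

struct Gaussian { double mean, var, prior; };

// Unnormalized log posterior: log p(x|class) + log P(class), dropping
// the terms common to both classes.
double logPosteriorUnnorm(const Gaussian& g, double x)
{
    double d = x - g.mean;
    return -0.5 * std::log(g.var) - d * d / (2.0 * g.var)
           + std::log(g.prior);
}

// Bayes decision rule: pick the class with the larger posterior.
int bayesClassify(const Gaussian& c1, const Gaussian& c2, double x)
{
    return logPosteriorUnnorm(c1, x) >= logPosteriorUnnorm(c2, x) ? 1 : 2;
}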
The source code for this package is located in src/gov/nist/sip/proxy. The proxy
is a pure JAIN-SIP application: it does not need proprietary nist-sip
classes in addition to those defined in JAIN-SIP 1.1, so you can substitute
another JAIN-SIP-1.1-compliant stack for the NIST-SIP stack and it should
interoperate.
The proxy can act as a presence server, processing NOTIFY and
SUBSCRIBE requests. If this parameter is disabled, the proxy simply
forwards those kinds of requests following the appropriate routing decision.
ITU-T G.729 speech compression algorithm.
description:
Fixed-point description of Recommendation G.729 with Annex B: coding of speech at 8 kbit/s using Conjugate-Structure Algebraic-Code-Excited Linear Prediction (CS-ACELP), with Voice Activity Detection (VAD), Discontinuous Transmission (DTX), and Comfort Noise Generation (CNG).
Hidden_Markov_model_for_automatic_speech_recognition
This code implements in C++ a basic left-right hidden Markov model
and the corresponding Baum-Welch (ML) training algorithm. It is meant as
an example of the HMM algorithms described by L. Rabiner (1) and
others. Serious students are directed to the sources listed below for
a theoretical description of the algorithm. K. F. Lee (2) offers an
especially good tutorial on how to build a speech recognition system
using hidden Markov models.
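As an illustration of the machinery involved, here is a minimal C++
sketch of the forward pass that Baum-Welch training builds on, in the
notation of Rabiner's tutorial (1). It is unscaled for brevity; a
practical implementation rescales alpha at each time step to avoid
underflow. Discrete observation symbols are assumed.

#include <cstddef>
#include <vector>

// alpha[t][j] = P(o_1..o_t, state_t = j | model)
double forwardLikelihood(
    const std::vector<std::vector<double>>& A,  // A[i][j] = P(j | i)
    const std::vector<std::vector<double>>& B,  // B[j][k] = P(symbol k | j)
    const std::vector<double>& pi,              // initial state probs
    const std::vector<int>& obs)                // observation symbols
{
    const std::size_t N = A.size(), T = obs.size();
    std::vector<std::vector<double>> alpha(T, std::vector<double>(N, 0.0));

    for (std::size_t j = 0; j < N; ++j)             // initialization
        alpha[0][j] = pi[j] * B[j][obs[0]];

    for (std::size_t t = 1; t < T; ++t)             // induction
        for (std::size_t j = 0; j < N; ++j) {
            double s = 0.0;
            for (std::size_t i = 0; i < N; ++i)
                s += alpha[t - 1][i] * A[i][j];
            alpha[t][j] = s * B[j][obs[t]];
        }

    double p = 0.0;                                 // termination
    for (std::size_t j = 0; j < N; ++j)
        p += alpha[T - 1][j];
    return p;                                       // P(obs | model)
}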
This directory contains code implementing the K-means algorithm. Source code
may be found in KMEANS.CPP. Sample data is found in KM2.DAT. The KMEANS
program accepts input consisting of vectors and calculates the given
number of cluster centers using the K-means algorithm. Output is
directed to the screen.
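A minimal C++ sketch of one K-means iteration over vector input of the
kind KMEANS.CPP accepts; function and variable names are illustrative,
not taken from the source. Repeating the iteration until no assignment
changes gives the full algorithm.

#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

static double sqDist(const Vec& a, const Vec& b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        s += d * d;
    }
    return s;
}

// One iteration: assign each vector to its nearest center, then move
// each center to the mean of its assigned vectors. Assumes non-empty
// data. Returns true if any assignment changed (not yet converged).
bool kmeansIteration(const std::vector<Vec>& data,
                     std::vector<Vec>& centers,
                     std::vector<std::size_t>& assign)
{
    bool changed = false;
    for (std::size_t n = 0; n < data.size(); ++n) {   // assignment step
        std::size_t best = 0;
        for (std::size_t k = 1; k < centers.size(); ++k)
            if (sqDist(data[n], centers[k]) < sqDist(data[n], centers[best]))
                best = k;
        if (assign[n] != best) { assign[n] = best; changed = true; }
    }
    const std::size_t dim = data[0].size();           // update step
    std::vector<Vec> sum(centers.size(), Vec(dim, 0.0));
    std::vector<std::size_t> count(centers.size(), 0);
    for (std::size_t n = 0; n < data.size(); ++n) {
        for (std::size_t i = 0; i < dim; ++i)
            sum[assign[n]][i] += data[n][i];
        ++count[assign[n]];
    }
    for (std::size_t k = 0; k < centers.size(); ++k)
        if (count[k] > 0)
            for (std::size_t i = 0; i < dim; ++i)
                centers[k][i] = sum[k][i] / count[k];
    return changed;
}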
Boosting is a meta-learning approach that aims at combining an ensemble of weak classifiers to form a strong classifier. Adaptive Boosting (AdaBoost) implements this idea as a greedy search for a linear combination of classifiers by overweighting the examples that are misclassified by each classifier. icsiboost implements AdaBoost over stumps (one-level decision trees) on discrete and continuous attributes (words and real values). See http://en.wikipedia.org/wiki/AdaBoost and the papers by Y. Freund and R. Schapire for more details [1]. This approach is one of the most efficient and simplest ways of combining continuous and nominal values. Our implementation aims to allow training on millions of examples with hundreds of features in reasonable time and memory.
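As an illustration, here is a minimal C++ sketch of the per-round weight
update at the core of AdaBoost over stumps. The stump search itself is
elided, and the structure shown is generic AdaBoost with +1/-1 labels
rather than icsiboost's actual code.

#include <cmath>
#include <cstddef>
#include <vector>

struct Stump {
    int feature;       // attribute index the stump tests
    double threshold;  // split point for a continuous attribute
    int polarity;      // +1 or -1
    double alpha;      // vote weight assigned by AdaBoost
    int predict(const std::vector<double>& x) const {
        return (x[feature] > threshold) ? polarity : -polarity;
    }
};

// One boosting round, given a stump already fit to the current weights
// (assumes 0 < weighted error < 1). Overweights misclassified examples
// and renormalizes the distribution.
void adaboostRound(Stump& h,
                   const std::vector<std::vector<double>>& X,
                   const std::vector<int>& y,
                   std::vector<double>& w)
{
    double err = 0.0;                              // weighted error
    for (std::size_t i = 0; i < X.size(); ++i)
        if (h.predict(X[i]) != y[i]) err += w[i];

    h.alpha = 0.5 * std::log((1.0 - err) / err);   // stump's vote

    double z = 0.0;
    for (std::size_t i = 0; i < X.size(); ++i) {
        // Misclassified examples (y*h = -1) are scaled by exp(+alpha),
        // correctly classified ones by exp(-alpha).
        w[i] *= std::exp(-h.alpha * y[i] * h.predict(X[i]));
        z += w[i];
    }
    for (double& wi : w) wi /= z;                  // renormalize
}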
Adaptive Filter. This script shows the BER performance of several types of equalizers in a static channel with a null in the passband. The script constructs and implements a linear equalizer object and a decision feedback equalizer (DFE) object. It also initializes and invokes a maximum likelihood sequence estimation (MLSE) equalizer. The MLSE equalizer is first invoked with perfect channel knowledge, then with a straightforward but imperfect channel estimation technique.
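For readers who want the underlying mechanics rather than the toolbox
objects, here is a minimal C++ sketch of the LMS adaptation at the
heart of a linear equalizer. Real-valued, symbol-spaced operation and
an illustrative step size are assumed; the script's equalizer objects
handle complex constellations and training sequences internally.

#include <cstddef>
#include <vector>

struct LmsEqualizer {
    std::vector<double> taps;    // filter coefficients
    std::vector<double> delay;   // tapped delay line of received samples
    double mu;                   // LMS step size, e.g. 0.01

    LmsEqualizer(std::size_t nTaps, double stepSize)
        : taps(nTaps, 0.0), delay(nTaps, 0.0), mu(stepSize) {}

    // Process one received sample; 'desired' is the known training
    // symbol (or the slicer decision in decision-directed mode).
    double step(double rx, double desired)
    {
        for (std::size_t i = delay.size() - 1; i > 0; --i)
            delay[i] = delay[i - 1];               // shift delay line
        delay[0] = rx;

        double y = 0.0;                            // equalizer output
        for (std::size_t i = 0; i < taps.size(); ++i)
            y += taps[i] * delay[i];

        double e = desired - y;                    // error signal
        for (std::size_t i = 0; i < taps.size(); ++i)
            taps[i] += mu * e * delay[i];          // LMS tap update
        return y;
    }
};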