How the K-means Clustering Algorithm Works
Step 1. Begin by deciding on the value of k, the number of clusters.
Step 2. Choose any initial partition that classifies the data into k clusters. You may assign the training samples randomly, or systematically as follows:
Take the first k training samples as single-element clusters.
Assign each of the remaining (N-k) training samples to the cluster with the nearest centroid. After each assignment, recompute the centroid of the gaining cluster.
Step 3. Take each sample in sequence and compute its distance from the centroid of each cluster. If a sample is not currently in the cluster with the closest centroid, move it to that cluster and update the centroids of both the cluster gaining the new sample and the cluster losing it.
Step 4. Repeat Step 3 until convergence is achieved, that is, until a complete pass through the training samples causes no new assignments.
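
A minimal sketch of this procedure, assuming the data sit in an N-by-D matrix X with one training sample per row (the function name and all variable names are illustrative, not taken from any particular toolbox):

    % Sketch of the procedure above: X is an N-by-D data matrix, one
    % training sample per row; k is the number of clusters.
    function [labels, C] = kmeans_sketch(X, k)
        [N, ~] = size(X);
        % Step 2: take the first k samples as single-element clusters ...
        C = X(1:k, :);
        labels = zeros(N, 1);
        labels(1:k) = 1:k;
        counts = ones(k, 1);
        % ... then assign each remaining sample to the nearest centroid,
        % recomputing the centroid of the gaining cluster after each assignment
        for n = k+1:N
            [~, j] = min(sum((C - X(n,:)).^2, 2));
            labels(n) = j;
            counts(j) = counts(j) + 1;
            C(j,:) = C(j,:) + (X(n,:) - C(j,:)) / counts(j);  % incremental mean
        end
        % Steps 3-4: re-examine samples until a full pass changes nothing
        changed = true;
        while changed
            changed = false;
            for n = 1:N
                [~, j] = min(sum((C - X(n,:)).^2, 2));
                i = labels(n);
                if j ~= i && counts(i) > 1  % move sample; update both centroids
                    C(i,:) = (C(i,:)*counts(i) - X(n,:)) / (counts(i) - 1);
                    C(j,:) = (C(j,:)*counts(j) + X(n,:)) / (counts(j) + 1);
                    counts(i) = counts(i) - 1;
                    counts(j) = counts(j) + 1;
                    labels(n) = j;
                    changed = true;
                end
            end
        end
    end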
Computes the BER versus Eb/No curve for a convolutional encoding / soft-decision Viterbi decoding scheme, assuming BPSK modulation. A brute-force Monte Carlo approach is unsatisfactory (it takes too long) for finding the BER curve.
The computation uses a quasi-analytic (QA) technique that relies on an approximate estimate of the information-bit Weight Enumerating Function (WEF), obtained by simulating the convolutional encoder. Once the WEF is estimated, the analytic formula for the BER is applied.
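
The analytic step is typically a union bound of the form Pb <= sum over d of B(d)*Q(sqrt(2*d*R*Eb/No)), where the B(d) are the information-weight coefficients read off the WEF. A minimal sketch of that step, assuming a rate-1/2 code and using the commonly tabulated spectrum of the K = 7 (171,133) code purely as placeholder input (in the actual script these coefficients would come from the simulation-based WEF estimate):

    % Sketch: quasi-analytic BER bound for soft-decision Viterbi decoding of a
    % rate-1/2 convolutional code with BPSK, assuming the information-weight
    % coefficients B(d) are already known (placeholder values below).
    R      = 1/2;                         % code rate
    dfree  = 10;                          % free distance of the assumed code
    Bd     = [36 0 211 0 1404 0 11633];   % B(d), d = dfree .. dfree+6
    d      = dfree : dfree + length(Bd) - 1;

    EbNodB = 0:0.5:8;
    EbNo   = 10.^(EbNodB/10);
    Pb     = zeros(size(EbNo));
    Qfun   = @(x) 0.5*erfc(x/sqrt(2));    % Gaussian Q-function via erfc

    for n = 1:length(EbNo)
        % Union bound: Pb <= sum_d B(d) * Q( sqrt(2*d*R*Eb/No) )
        Pb(n) = sum( Bd .* Qfun( sqrt(2*d*R*EbNo(n)) ) );
    end

    semilogy(EbNodB, Pb); grid on;
    xlabel('Eb/No (dB)'); ylabel('BER (union bound)');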
The MDP toolbox provides functions for solving discrete-time Markov Decision Processes: finite-horizon, value iteration, policy iteration, and linear programming algorithms, with some variants.
The functions (m-functions) were developed with MATLAB v6.0 (one of the functions requires the MathWorks Optimization Toolbox) by the decision team of the Biometry and Artificial Intelligence Unit of INRA Toulouse (France).
Version 2.0 (February 2005) handles sparse matrices and includes an example.
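
As a rough illustration of what one of these solvers does (not the toolbox's actual API; the function name, argument layout, and stopping rule below are all assumptions), here is a minimal value iteration sketch:

    % Minimal value iteration sketch for a discrete-time MDP (hypothetical,
    % not the toolbox's actual API). P(:,:,a) is the S-by-S transition matrix
    % for action a; R is an S-by-A reward matrix; gamma is the discount factor.
    function [V, policy] = value_iteration_sketch(P, R, gamma, tol)
        [S, A] = size(R);
        V = zeros(S, 1);
        while true
            Q = zeros(S, A);
            for a = 1:A
                % Bellman backup: expected reward plus discounted future value
                Q(:, a) = R(:, a) + gamma * P(:, :, a) * V;
            end
            [Vnew, policy] = max(Q, [], 2);
            if max(abs(Vnew - V)) < tol   % stop when the value change is small
                V = Vnew;
                return
            end
            V = Vnew;
        end
    end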
A Web Tutorial on Discrete Features of Bayes Decision Theory
This applet computes the decision boundary for a three-dimensional feature vector. Specifically, by specifying variables such as the priors and the conditional likelihoods of each feature with respect to each class, the user can see how the decision boundary changes.
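
For two classes omega_1, omega_2 and three conditionally independent binary features with p_i = P(x_i = 1 | omega_1) and q_i = P(x_i = 1 | omega_2), the resulting discriminant is linear in the features. A minimal sketch with illustrative numbers (the priors and likelihoods below are assumptions, stand-ins for whatever the user stipulates in the applet):

    % Sketch: linear discriminant for Bayes decision theory with three binary
    % features, assuming class-conditional independence (illustrative numbers).
    p   = [0.8 0.5 0.9];     % p_i = P(x_i = 1 | omega_1)  (assumed values)
    q   = [0.3 0.4 0.2];     % q_i = P(x_i = 1 | omega_2)  (assumed values)
    Pw1 = 0.5;               % prior P(omega_1)             (assumed value)
    Pw2 = 1 - Pw1;

    % g(x) = sum_i w_i*x_i + w0; decide omega_1 when g(x) > 0
    w  = log( p.*(1-q) ./ (q.*(1-p)) );
    w0 = sum( log((1-p)./(1-q)) ) + log(Pw1/Pw2);

    x = [1 0 1];             % an example binary feature vector
    g = w*x' + w0;
    if g > 0
        disp('decide omega_1');
    else
        disp('decide omega_2');
    end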