How the K-Means Clustering Algorithm Works
Step 1. Begin by deciding on the value of k, the number of clusters.
Step 2. Choose any initial partition that classifies the data into k clusters. You may assign the training samples randomly, or systematically as follows:
Take the first k training samples as single-element clusters.
Assign each of the remaining (N - k) training samples to the cluster with the nearest centroid. After each assignment, recompute the centroid of the cluster that gained the sample.
Step 3. Take each sample in sequence and compute its distance from the centroid of each cluster. If a sample is not currently in the cluster with the closest centroid, move it to that cluster and update the centroids of both the cluster gaining the sample and the cluster losing it.
Step 4. Repeat Step 3 until convergence is achieved, that is, until a complete pass through the training samples causes no new assignments. (A code sketch of these steps follows.)
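The sketch below is a minimal Python/NumPy illustration of the procedure above. It uses a batch (Lloyd-style) update, recomputing all centroids after each full pass rather than after every single reassignment as described in Step 3; the function name, defaults, and example data are assumptions for illustration, not part of any original program.

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal batch k-means sketch (illustrative names and defaults)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    # Step 2 (random variant): pick k distinct samples as the initial centroids
    centroids = X[rng.choice(n, size=k, replace=False)].astype(float)
    labels = None
    for _ in range(max_iter):
        # Step 3: distance of every sample to every centroid, then reassign
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Step 4: convergence -- a full pass produced no new assignments
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Recompute the centroid of every non-empty cluster
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

# Example: three well-separated 2-D blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ([0, 0], [3, 3], [0, 3])])
labels, centroids = kmeans(X, k=3)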
This program simulates plant identification using the frequency-domain block least-mean-square (FBLMS) algorithm.
Reference: "Fast Frequency-Domain Implementation of the LMS Algorithm" (《LMS算法的頻域快速實現》). LMS is modified by XXX in XXX place; see details in the XXX relevant document.
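As an illustration of what such a simulation can look like, here is a hedged Python sketch of the standard overlap-save (constrained) frequency-domain block LMS applied to identifying a short FIR plant. All names (fblms_identify, M, mu, the example plant coefficients) are assumptions for this sketch, not taken from the referenced program.

import numpy as np

def fblms_identify(x, d, M=32, mu=0.05):
    """Overlap-save frequency-domain block LMS (sketch).

    x : input signal fed to the unknown plant
    d : plant output (desired signal)
    M : adaptive-filter length (= block length)
    mu: step size
    Returns the time-domain weight estimate and the block error signal.
    """
    W = np.zeros(2 * M, dtype=complex)   # frequency-domain weights
    x_old = np.zeros(M)                  # previous input block
    errors = []
    for k in range(len(x) // M):
        x_new = x[k * M:(k + 1) * M]
        d_blk = d[k * M:(k + 1) * M]
        X = np.fft.fft(np.concatenate([x_old, x_new]))   # 2M-point FFT
        y = np.real(np.fft.ifft(X * W))[M:]              # overlap-save: keep last M samples
        e = d_blk - y
        E = np.fft.fft(np.concatenate([np.zeros(M), e]))
        # Gradient constraint: keep only the first M samples of the correlation
        grad = np.real(np.fft.ifft(np.conj(X) * E))[:M]
        W = W + mu * np.fft.fft(np.concatenate([grad, np.zeros(M)]))
        x_old = x_new
        errors.append(e)
    w_hat = np.real(np.fft.ifft(W))[:M]                  # time-domain weight estimate
    return w_hat, np.concatenate(errors)

# Hypothetical usage: identify a short FIR "plant"
rng = np.random.default_rng(0)
plant = np.array([1.0, 0.5, -0.3, 0.1])
x = rng.standard_normal(4096)
d = np.convolve(x, plant)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, e = fblms_identify(x, d, M=32, mu=0.05)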
Generate 100 samples of a zero-mean white-noise sequence x(n), with variance σ_x^2, by using a uniform random number generator. (A Python sketch of parts (a) through (d) follows the list.)
(a) Compute the sample autocorrelation r_xx(m) of x(n) over the given range of lags m.
(b) Compute the periodogram estimate P_xx(f) and plot it.
(c) Generate 10 different realizations of x(n), and compute the corresponding sample autocorrelation sequences r_i(m), i = 1, 2, ..., 10. Compute the average autocorrelation sequence r_av(m) = (1/10) * Σ_{i=1}^{10} r_i(m) and the corresponding periodogram for r_av(m).
(d) Compute and plot the average periodogram using the Bartlett method.
(e) Comment on the results in parts (a) through (d).
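The following Python/NumPy sketch works through parts (a)-(d). The uniform range [-0.5, 0.5] (variance 1/12) and the FFT length are assumptions, since the variance and lag range are not specified above, and the Bartlett step is taken here as averaging the periodograms of the 10 independent realizations.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N = 100        # samples per realization
NFFT = 512     # assumed FFT length for the spectral plots

def sample_autocorr(x):
    """Biased sample autocorrelation r(m) = (1/N) * sum_n x(n) x(n+m), m = 0..N-1."""
    n = len(x)
    return np.correlate(x, x, mode="full")[n - 1:] / n

def periodogram(x, nfft=NFFT):
    """Periodogram estimate P(f) = |X(f)|^2 / N evaluated on nfft points."""
    return np.abs(np.fft.rfft(x, nfft)) ** 2 / len(x)

# (a)-(b): one realization of zero-mean white noise from a uniform generator
x = rng.uniform(-0.5, 0.5, N)          # assumed range -> variance 1/12
r = sample_autocorr(x)
P = periodogram(x)

# (c): 10 realizations, averaged autocorrelation and its spectrum
realizations = [rng.uniform(-0.5, 0.5, N) for _ in range(10)]
r_av = np.mean([sample_autocorr(xi) for xi in realizations], axis=0)
r_av_twosided = np.concatenate([r_av[:0:-1], r_av])   # lags -(N-1)..(N-1)
P_av = np.abs(np.fft.rfft(r_av_twosided, NFFT))       # magnitude of its transform

# (d): Bartlett-style average -- mean of the 10 individual periodograms
P_bartlett = np.mean([periodogram(xi) for xi in realizations], axis=0)

# Plots for (b) and (d)
f = np.linspace(0, 0.5, len(P))
plt.plot(f, P, label="single periodogram")
plt.plot(f, P_bartlett, label="averaged (Bartlett)")
plt.xlabel("normalized frequency")
plt.ylabel("power")
plt.legend()
plt.show()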