Runs a Kalman-Bucy filter over the observation matrix Z
for 1-step prediction onto matrix X (X may equal Z)
with model order p.
V = initial covariance of the observation sequence noise.
Returns the model parameter estimation sequence A,
the sequence of predicted outcomes y_pred,
the (reshaped) error matrices Ey for y and Ea for a,
and the innovation probability P = P(y_t | D_{t-1}), i.e. the evidence.
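A minimal Python sketch of such a filter for the scalar case. The random-walk coefficient model and the process-noise scale q are assumptions not stated above, the names are hypothetical, and the full A and error sequences returned by the routine are omitted for brevity:

import numpy as np

def kalman_ar_predict(z, p, v=1.0, q=1e-4):
    # One-step-ahead prediction of a 1-D numpy array z with a time-varying
    # AR(p) model whose coefficients a_t are tracked by a Kalman filter:
    #   state:       a_t = a_{t-1} + w_t,   w_t ~ N(0, q*I)  (random walk)
    #   observation: z_t = h_t' a_t + e_t,  e_t ~ N(0, v)
    # with regressor h_t = (z_{t-1}, ..., z_{t-p}).
    n = len(z)
    a = np.zeros(p)                     # coefficient estimate
    P = np.eye(p)                       # coefficient covariance
    y_pred = np.zeros(n)
    evidence = np.zeros(n)              # P(y_t | D_{t-1})
    for t in range(p, n):
        h = z[t - p:t][::-1]            # most recent p observations
        P = P + q * np.eye(p)           # time update (random-walk drift)
        y_pred[t] = h @ a               # one-step prediction
        e = z[t] - y_pred[t]            # innovation
        s = h @ P @ h + v               # innovation variance
        evidence[t] = np.exp(-0.5 * e**2 / s) / np.sqrt(2 * np.pi * s)
        k = P @ h / s                   # Kalman gain
        a = a + k * e                   # measurement update
        P = P - np.outer(k, h @ P)
    return a, y_pred, evidence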
This talk centered on Hamming's observations and research on the question, "Why do so few scientists make significant contributions and so many are forgotten in the long run?"
The subroutines glkern.f and lokern.f use an efficient and fast algorithm for
automatically adaptive nonparametric regression estimation with a kernel method.
Roughly speaking, the method estimates the regression function by locally
averaging the observations. Analogously, one can estimate low-order derivatives
of the regression function.
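To illustrate the local-averaging idea, here is a minimal Nadaraya-Watson estimator in Python with a Gaussian kernel. Unlike glkern.f and lokern.f, which select the bandwidth automatically, this sketch takes a fixed bandwidth h as input:

import numpy as np

def nw_regression(x, y, xq, h):
    # Nadaraya-Watson estimator: the value at each query point in xq is a
    # weighted local average of the observations y, with Gaussian weights
    # of bandwidth h.
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# noisy observations of a smooth function, then a local average over them
x = np.sort(np.random.rand(200))
y = np.sin(2 * np.pi * x) + 0.2 * np.random.randn(200)
fit = nw_regression(x, y, np.linspace(0, 1, 100), h=0.05)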
In this paper, we consider the problem of filtering in relational
hidden Markov models. We present a compact representation for
such models and an associated logical particle filtering algorithm. Each
particle contains a logical formula that describes a set of states. The
algorithm updates the formulae as new observations are received. Since
a single particle tracks many states, this filter can be more accurate
than a traditional particle filter in high-dimensional state spaces, as we
demonstrate in experiments.
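A much-simplified sketch of one update step of such a filter, with an explicit state set standing in for each particle's logical formula (so the compactness that motivates the paper is lost); the trans and lik interfaces and the resampling scheme are assumptions for illustration, not the paper's algorithm:

import random

def lpf_step(particles, obs, trans, lik, n):
    # One update step. Each particle is (states, weight), where `states`
    # is a set of states standing in for a logical formula; trans(s)
    # yields the successors of state s, and lik(o, s) gives P(o | s).
    updated = []
    for states, w in particles:
        succ = {s2 for s in states for s2 in trans(s)}
        # drop successor states inconsistent with the new observation
        alive = {s for s in succ if lik(obs, s) > 0.0}
        if alive:
            avg = sum(lik(obs, s) for s in alive) / len(alive)
            updated.append((alive, w * avg))
    if not updated:
        return []
    # resample n particles proportional to weight
    picks = random.choices(updated, weights=[w for _, w in updated], k=n)
    return [(set(states), 1.0 / n) for states, _ in picks]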
The Viterbi algorithm. You talk to your friend three days in a row and discover that on the first day he went for a walk, on the second day he went shopping, and on the third day he cleaned his apartment. You have two questions: What is the overall probability of this sequence of observations? And what is the most likely sequence of rainy/sunny days that would explain these observations? The first question is answered by the forward algorithm; the second is answered by the Viterbi algorithm. These two algorithms are structurally so similar (in fact, they are both instances of the same abstract algorithm) that they can be implemented in a single function:
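A Python sketch of that single function; the only difference between the two algorithms is that the forward pass sums over source states while Viterbi maximizes. The transition and emission probabilities in the usage example are illustrative values, not given in the text above:

def forward_viterbi(obs, states, start_p, trans_p, emit_p):
    # T[state] = (total probability of all paths ending in `state`,
    #             most likely path ending in `state`, its probability)
    T = {s: (start_p[s], [s], start_p[s]) for s in states}
    for output in obs:
        U = {}
        for nxt in states:
            total, argmax, valmax = 0.0, None, 0.0
            for src in states:
                prob, v_path, v_prob = T[src]
                p = emit_p[src][output] * trans_p[src][nxt]
                prob *= p       # forward: sum over source states
                v_prob *= p     # Viterbi: max over source states
                total += prob
                if v_prob > valmax:
                    argmax, valmax = v_path + [nxt], v_prob
            U[nxt] = (total, argmax, valmax)
        T = U
    # final sum (forward) and max (Viterbi) over the end states
    total, argmax, valmax = 0.0, None, 0.0
    for s in states:
        prob, v_path, v_prob = T[s]
        total += prob
        if v_prob > valmax:
            argmax, valmax = v_path, v_prob
    return total, argmax, valmax

states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')
start_p = {'Rainy': 0.6, 'Sunny': 0.4}
trans_p = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
           'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
print(forward_viterbi(observations, states, start_p, trans_p, emit_p))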
% EM algorithm for k multidimensional Gaussian mixture estimation
%
% Inputs:
% X(n,d) - input data, n=number of observations, d=dimension of variable
% k - maximum number of Gaussian components allowed
% ltol - percentage of the log likelihood difference between 2 iterations ([] for none)
% maxiter - maximum number of iteration allowed ([] for none)
% pflag - 1 for plotting GM for 1D or 2D cases only, 0 otherwise ([] for none)
% Init - structure of initial W, M, V: Init.W, Init.M, Init.V ([] for none)
%
% Outputs:
% W(1,k) - estimated weights of GM
% M(d,k) - estimated mean vectors of GM
% V(d,d,k) - estimated covariance matrices of GM
% L - log likelihood of estimates
%
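A minimal Python sketch of the EM loop this header describes. Random initialization stands in for the Init option, plotting and the [] defaults are omitted, and the covariances are returned as k x d x d rather than d x d x k; none of this reproduces the MATLAB routine exactly:

import numpy as np

def em_gmm(X, k, ltol=1e-4, maxiter=100):
    # X: (n, d) data; returns weights W, means M (d, k),
    # covariances V (k, d, d), and final log likelihood L.
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = np.full(k, 1.0 / k)
    M = X[rng.choice(n, k, replace=False)].T
    V = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    L_old = -np.inf
    for _ in range(maxiter):
        # E-step: joint densities, then responsibilities R (n, k)
        R = np.empty((n, k))
        for j in range(k):
            diff = X - M[:, j]
            inv = np.linalg.inv(V[j])
            norm = ((2 * np.pi) ** d * np.linalg.det(V[j])) ** -0.5
            R[:, j] = W[j] * norm * np.exp(
                -0.5 * np.einsum('ni,ij,nj->n', diff, inv, diff))
        L = np.log(R.sum(axis=1)).sum()      # log likelihood
        R /= R.sum(axis=1, keepdims=True)
        # M-step: reestimate weights, means, covariances
        Nk = R.sum(axis=0)
        W = Nk / n
        M = (X.T @ R) / Nk
        for j in range(k):
            diff = X - M[:, j]
            V[j] = (R[:, j, None] * diff).T @ diff / Nk[j] + 1e-6 * np.eye(d)
        if abs(L - L_old) < ltol * abs(L):   # relative tolerance, as above
            break
        L_old = L
    return W, M, V, L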
The algorithm is equivalent to Infomax by Bell and Sejnowski (1995) [1], using a maximum likelihood formulation. No noise is assumed, and the number of observations must equal the number of sources. The BFGS method [2] is used for optimization.
The number of independent components is estimated using the Bayesian Information Criterion (BIC) [3], with PCA for dimension reduction.
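For illustration, a minimal Python sketch of the same maximum-likelihood objective, climbed here with the well-known natural-gradient Infomax update rather than BFGS, and without the BIC/PCA model-order step; the learning rate and iteration count are arbitrary:

import numpy as np

def infomax_ica(X, lr=0.01, iters=200):
    # X: (n, T) mixed signals, assumed zero-mean and whitened, with as
    # many observations (rows) as sources, as the text above requires.
    n, T = X.shape
    W = np.eye(n)                              # square unmixing matrix
    for _ in range(iters):
        U = W @ X
        Y = 1.0 / (1.0 + np.exp(-U))           # logistic nonlinearity
        # natural-gradient ML update (Bell & Sejnowski / Amari)
        W += lr * (np.eye(n) + (1 - 2 * Y) @ U.T / T) @ W
    return W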
This paper addresses the subject of SQL Injection in a Microsoft SQL Server/IIS/Active
Server Pages environment, but most of the techniques discussed have equivalents in other
database environments. It should be viewed as a "follow-up", or perhaps an appendix, to
the previous paper, "Advanced SQL Injection".
The paper covers in more detail some of the points described in its predecessor, providing
examples to clarify areas where the previous paper was perhaps unclear. An effective
method for privilege escalation is described that makes use of the openrowset function to
scan a network. A novel method for extracting information in the absence of helpful
error messages is described: the use of time delays as a transmission channel. Finally, a
number of miscellaneous observations and useful hints are provided, collated from
responses to the original paper, and various conversations around the subject of SQL
injection in a SQL Server environment.
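A sketch of the time-delay channel on the client side (the URL, injected predicate, and threshold below are illustrative, not taken from the paper): if the injected condition holds on the server, the response stalls, so one bit per request can be read from the elapsed time.

import time
import urllib.request

def read_bit(url, threshold=4.0):
    # The request is assumed to carry an injected T-SQL predicate such as
    # IF (<condition>) WAITFOR DELAY '0:0:5', so a slow response signals
    # that the condition was true on the server.
    t0 = time.time()
    urllib.request.urlopen(url)
    return time.time() - t0 > threshold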