Double-moon data generation and its common algorithms

Algorithm part:

Data used: the data from the article "Double-moon data generation and its common algorithms (I)" on the Anti hole hamster CSDN blog.
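The method snippets below are fragments of a larger class and refer to attributes (self.w, self.lr, self.epoch, self.lamba) that are not shown here. The following is a minimal sketch, not the author's original code: it assumes a hypothetical MoonClassifier wrapper and one common double-moon parameterization, with np.matrix inputs (so that * is matrix multiplication and .I is the inverse) and the desired response stored in column 2 of target.

import numpy as np

# Hypothetical scaffold for the method snippets below (an assumption, not the
# original post's class): holds the weight vector and the hyperparameters.
class MoonClassifier:
    def __init__(self, dim=3, lr=0.01, epoch=50, lamba=0.1):
        self.w = np.zeros(dim)    # weight vector (bias term included as w[0])
        self.lr = lr              # learning rate for SLP / LMS
        self.epoch = epoch        # number of passes over the data
        self.lamba = lamba        # regularization strength for MAP

# One common double-moon parameterization (radius r, ring width w, vertical gap d).
# Rows of trainData are [1, x1, x2] (bias column first); rows of target are
# [x1, x2, label], so the label sits in column 2 as the snippets assume.
def double_moon(n=1000, r=10.0, w=6.0, d=1.0):
    def half_moon(flip, dx, dy):
        theta = np.random.uniform(0.0, np.pi, n)
        rad = r + np.random.uniform(-w / 2.0, w / 2.0, n)
        return np.c_[rad * np.cos(theta) + dx, flip * rad * np.sin(theta) + dy]

    X = np.vstack([half_moon(1.0, 0.0, 0.0), half_moon(-1.0, r, -d)])
    labels = np.r_[np.ones(n), -np.ones(n)]
    trainData = np.matrix(np.c_[np.ones(2 * n), X])
    target = np.matrix(np.c_[X, labels])
    return trainData, target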

(1) Least squares:

Theoretical part:

The optimal parameters are obtained in closed form by a matrix solution: the weight vector W = (X.T * X).I * X.T * Y is the best solution we need, where X is the data matrix and Y is the corresponding output.

Code implementation:

Here trainData is the input data and target holds the corresponding labels.

    # Least-squares algorithm: closed-form solution W = (X.T*X).I * X.T * Y
    def MIN2X(self, trainData, target):
        # trainData and target are np.matrix objects; column 2 of target is the desired response
        self.w = np.array((trainData.T * trainData).I * trainData.T * target[:, 2:])
        print("MIN2X The updated weight is: {}".format(self.w))

(2) ML: maximum likelihood estimation

Theoretical part:

Maximum likelihood estimation is a standard statistical method from probability theory. It looks for the most likely value of the parameter θ: among all possible values of θ, it picks the one that maximizes the "likelihood" of the observed sample.

Our goal is to maximize the likelihood function of the observed data.

Maximizing the log-likelihood is in fact equivalent to minimizing the sum-of-squares error, so the maximum likelihood solution can be found by minimizing that error.

Setting the derivative of the log-likelihood (equivalently, of the sum-of-squares error) to zero, the optimal solution has the form:

W(ML) = Rxx(N).I * Rdx(N)

where Rxx is the autocorrelation matrix and Rdx is the cross-correlation matrix:

Rxx(N) = X.T * X (the sum of x(n)·x(n).T over the N samples)        Rdx(N) = X.T * d (the sum of x(n)·d(n))

Substituting X and the corresponding desired response d gives the ML-optimal parameters for this data.

Code part:

    # ML algorithm: W(ML) = Rxx.I * Rdx
    def ML(self, TrainData, target):
        Rxx = TrainData.T * TrainData      # autocorrelation matrix
        Rdx = TrainData.T * target[:, 2:]  # cross-correlation vector
        self.w = np.array(Rxx.I * Rdx)
        print("ML The updated weight is: {}".format(self.w))

(3) MAP: maximum a posteriori estimation

Theory part: unlike maximum likelihood estimation, maximum a posteriori estimation seeks the parameter value that maximizes the posterior probability. The posterior distribution of the parameters is obtained by combining their prior distribution with the sample information. With a zero-mean Gaussian prior on the weights, the solution takes the form W(MAP) = (Rxx + λ·I).I * Rdx, i.e. the ML solution with an added regularization term λ·I.

Another essential difference from ML estimation is that MAP estimation is a biased estimator, whereas ML estimation is unbiased.

Code part:

    # MAP algorithm: W(MAP) = (Rxx + lambda*I).I * Rdx
    def MAP(self, TrainData, target):
        Rxx = TrainData.T * TrainData      # autocorrelation matrix
        Rdx = TrainData.T * target[:, 2:]  # cross-correlation vector
        # self.lamba is the regularization strength; np.eye builds the lambda*I term
        self.w = np.array((Rxx + self.lamba * np.eye(len(TrainData.T))).I * Rdx)
        print("MAP The updated weight is: {}".format(self.w))

(4) SLP: single layer perceptron

Theory part: the weights are optimized iteratively: the algorithm sweeps through the samples, and whenever a sample is misclassified it updates the weight vector, repeating until the classification error is small.

Code part:

    # Single-layer perceptron: update w whenever a sample is misclassified
    def SLP(self, trainData, target):
        ERROR = []
        allNumbel = len(trainData)
        for i in range(self.epoch):
            errorNumbel = 0
            for x, y in zip(trainData, target):
                # y[2] is the desired label (+1 / -1)
                if y[2] != np.sign(np.dot(self.w.T, x)):
                    errorNumbel = errorNumbel + 1
                    self.w = self.w + self.lr * np.sign(y[2] - np.dot(self.w.T, x)) * x

            ERROR.append(errorNumbel / allNumbel)

        print("SLP The updated weight is: {}".format(self.w))
        return ERROR
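A hedged usage sketch with the hypothetical scaffold; the matrices are converted to plain ndarrays so that iterating over rows inside SLP yields 1-D samples.

# Hypothetical run of the perceptron; ERROR holds the misclassification rate per epoch
slp = MoonClassifier(dim=3, lr=0.01, epoch=50)
errs = slp.SLP(np.asarray(trainData), np.asarray(target))
print("final error rate:", errs[-1])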

(5) LMS: least mean squares

Theory part: through repeated iteration, find a weight vector that minimizes the error between the prediction and the desired response, then keep refining the parameters; that is, find w* such that

E(w*) ≤ E(w) for every w

The general approach is to keep iterating so that E(w(i)) < E(w(i-1)), and so on until the cost function is small enough.

Code part:

    # LMS (least mean squares) algorithm: per-sample gradient update of w
    def LMS(self, trainData, target):
        ERROR = []
        allNumbel = len(trainData)
        for i in range(self.epoch):
            errorNumbel = 0
            for x, y in zip(trainData, target):
                x = np.squeeze(x.tolist())
                y = np.squeeze(y.tolist())

                # count misclassified samples for the per-epoch error rate
                if y[2] != np.sign(np.dot(self.w.T, x)):
                    errorNumbel = errorNumbel + 1

                # LMS update: w <- w + lr * x * (d - w.T x)
                err = y[2] - np.dot(x, self.w)
                self.w = self.w + self.lr * x * err
            ERROR.append(errorNumbel / allNumbel)

        print("LMS The updated weight is: {}".format(self.w))
        return ERROR
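A hedged usage sketch for LMS with the hypothetical scaffold, again passing plain ndarrays; the returned list makes it easy to check that the error rate falls over the epochs.

# Hypothetical run of LMS on the same double-moon data
lms = MoonClassifier(dim=3, lr=0.001, epoch=50)
errs = lms.LMS(np.asarray(trainData), np.asarray(target))
print("first / last epoch error rate:", errs[0], errs[-1])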

Results:

(Result plots: classification results and decision planes for each algorithm.)

The above discusses the data, its classification, and the decision plane only for the linearly separable case; the complete code is available at the link below.

Double-moon data and its classification algorithms.zip - machine learning resources - CSDN Download

 

Tags: Python Algorithm Machine Learning

Posted on Wed, 01 Dec 2021 19:40:51 -0500 by ozfred