1, Experimental purpose

Through this experiment, students will understand and master how to design a nonlinear discriminant function using the idea of potential functions in pattern recognition, and will be able to implement pattern classification. Students will also apply knowledge from prerequisite courses, such as data structures and algorithm design, selecting suitable data structures to complete the algorithm design and program implementation. A nonlinear discriminant function is established from the training data, classification prediction is carried out by substituting the samples to be classified into it, and the correctness of the classifier is verified by checking the prediction results against the geometric distribution of the data. By designing a classifier with this method, students strengthen their understanding and application of nonlinear classifiers and consolidate the content of the pattern recognition course.

2, Basic idea of potential function method

Suppose the samples to be classified fall into two categories, $\omega_1$ and $\omega_2$, and can be regarded as points $x_k$ distributed in an $n$-dimensional pattern space.

A point belonging to $\omega_1$ can be likened to an energy source, at which the potential reaches its peak.

As the distance from that point increases, the potential decays rapidly; the potential that sample $x_k$ induces at a nearby point $x$ in the space is described by a potential function $K(x, x_k)$.

For the cluster of samples belonging to $\omega_1$, a "highland" forms in the surrounding space, with the sample points as its "hilltops".

Similarly, treating the samples of $\omega_2$ with the opposite sign of potential, a "depression" forms in the space near them.

Any appropriate contour line chosen between the two potential distributions can then serve as the discriminant function for pattern classification.

3, Generation of discriminant function

The discriminant function for pattern classification can be built from the many sample vectors $\{x_k,\ k = 1, 2, \dots,\ x_k \in \omega_1 \cup \omega_2\}$ distributed in the pattern space.

The potential function generated by any sample is denoted $K(x, x_k)$; the discriminant function $d(x)$ is then composed of the sequence of potential functions $K(x, x_1), K(x, x_2), \dots$, which correspond to the training samples $x_1, x_2, \dots$ fed into the machine during training.

In the training phase, the pattern samples are input to the classifier one by one, and the classifier keeps updating the corresponding potential function. The accumulated potential at iteration $k$ is the accumulation of all individual potential functions up to that step.

The accumulated potential function is denoted $K(x)$. If the newly added training sample $x_{k+1}$ is misclassified, the accumulated function is corrected; if it is classified correctly, it remains unchanged.

3. Discriminant function generation and stepwise analysis

Let the initial potential function be $K_0(x) = 0$.

Step 1: add the first training sample $x_1$. Then we have

$$K_1(x) = \begin{cases} K(x, x_1) & \text{if } x_1 \in \omega_1 \\ -K(x, x_1) & \text{if } x_1 \in \omega_2 \end{cases}$$

In this first step, the accumulated potential function $K_1(x)$ describes the boundary division after the first sample is added: when the sample belongs to $\omega_1$ the potential function is positive; when it belongs to $\omega_2$ the potential function is negative.

Step 2: add the second training sample $x_2$. Then:

(i) If $x_2 \in \omega_1$ and $K_1(x_2) > 0$, or $x_2 \in \omega_2$ and $K_1(x_2) < 0$, the classification is correct. In this case $K_2(x) = K_1(x)$, i.e. the accumulated potential function remains unchanged.

(ii) If $x_2 \in \omega_1$ and $K_1(x_2) < 0$, then

$$K_2(x) = K_1(x) + K(x, x_2) = \pm K(x, x_1) + K(x, x_2)$$

(iii) If $x_2 \in \omega_2$ and $K_1(x_2) > 0$, then

$$K_2(x) = K_1(x) - K(x, x_2) = \pm K(x, x_1) - K(x, x_2)$$

Cases (ii) and (iii) above correspond to misclassification: $x_2$ lies on the wrong side of the boundary defined by $K_1(x)$. When $x_2 \in \omega_1$, the accumulated potential $K_2(x)$ gains the term $K(x, x_2)$; when $x_2 \in \omega_2$, it loses the term $K(x, x_2)$.

Step $k$: let $K_k(x)$ be the accumulated potential after the training samples $x_1, x_2, \dots, x_k$ have been added. When the $(k+1)$-th sample is added, $K_{k+1}(x)$ is determined as follows:

1. If $x_{k+1} \in \omega_1$ and $K_k(x_{k+1}) > 0$, or $x_{k+1} \in \omega_2$ and $K_k(x_{k+1}) < 0$, the classification is correct. In this case $K_{k+1}(x) = K_k(x)$, i.e. the accumulated potential remains unchanged.

2. If $x_{k+1} \in \omega_1$ and $K_k(x_{k+1}) < 0$, then $K_{k+1}(x) = K_k(x) + K(x, x_{k+1})$;

3. If $x_{k+1} \in \omega_2$ and $K_k(x_{k+1}) > 0$, then $K_{k+1}(x) = K_k(x) - K(x, x_{k+1})$.

Therefore the iteration for the accumulated potential can be written as $K_{k+1}(x) = K_k(x) + r_{k+1} K(x, x_{k+1})$, where $r_{k+1}$ is the correction coefficient:

$$r_{k+1} = \begin{cases} 0 & \text{if } x_{k+1} \in \omega_1 \text{ and } K_k(x_{k+1}) > 0 \\ 0 & \text{if } x_{k+1} \in \omega_2 \text{ and } K_k(x_{k+1}) < 0 \\ 1 & \text{if } x_{k+1} \in \omega_1 \text{ and } K_k(x_{k+1}) < 0 \\ -1 & \text{if } x_{k+1} \in \omega_2 \text{ and } K_k(x_{k+1}) > 0 \end{cases}$$
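The four cases can be collapsed into a single rule. A minimal Python sketch (the function name `correction_coefficient` is our own; the text does not prescribe an interface, and we treat a potential of exactly zero as a misclassification, consistent with the first sample being added when $K_0 = 0$):

```python
def correction_coefficient(true_class, accumulated_potential):
    """Correction coefficient r_{k+1} for a new sample x_{k+1}.

    true_class: 1 for omega_1, 2 for omega_2.
    accumulated_potential: K_k(x_{k+1}), the accumulated potential
    evaluated at the new sample.
    """
    if true_class == 1:
        # omega_1 samples should already have positive potential
        return 0 if accumulated_potential > 0 else 1
    else:
        # omega_2 samples should already have negative potential
        return 0 if accumulated_potential < 0 else -1
```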

If, from the given training sequence $x_1, x_2, \dots, x_k, \dots$, we remove the samples that do not change the accumulated potential, i.e. those with $K_j(x_{j+1}) > 0$ and $x_{j+1} \in \omega_1$, or $K_j(x_{j+1}) < 0$ and $x_{j+1} \in \omega_2$, we obtain a reduced sequence $\{\hat{x}_1, \hat{x}_2, \dots, \hat{x}_j, \dots\}$ consisting entirely of the error-correcting samples. The iterative formula can then be summarized as:

$$K_{k+1}(x) = \sum_{\hat{x}_j} a_j K(x, \hat{x}_j)$$

where

$$a_j = \begin{cases} +1 & \text{for } \hat{x}_j \in \omega_1 \\ -1 & \text{for } \hat{x}_j \in \omega_2 \end{cases}$$

That is, the accumulated potential generated by the $k+1$ training samples equals the difference between the total potential of the error-correcting samples of class $\omega_1$ and that of class $\omega_2$.

It can be seen that the accumulated potential plays the role of a discriminant function: when $x_{k+1} \in \omega_1$, $K_k(x_{k+1}) > 0$; when $x_{k+1} \in \omega_2$, $K_k(x_{k+1}) < 0$. The accumulated potential can therefore be used as the discriminant function without any modification.

Because misclassification of a pattern sample changes the accumulated potential during training, the potential function algorithm provides a deterministic iterative process for the discriminant function separating $\omega_1$ and $\omega_2$. Taking $d(x) = K(x)$ as the discriminant function, we have $d_{k+1}(x) = d_k(x) + r_{k+1} K(x, x_{k+1})$.
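The whole iteration amounts to repeated passes over the training set until no sample triggers a correction. A Python sketch under our own naming (the `kernel` argument is any symmetric potential function $K(x, x_k)$; the required implementation for the experiment is in MATLAB, Section 8):

```python
def train_potential_classifier(samples, labels, kernel, max_epochs=100):
    """Iterate K_{k+1}(x) = K_k(x) + r_{k+1} K(x, x_{k+1}) until no sample
    is misclassified. Returns the error-correcting terms (a_j, x_j),
    so that d(x) = sum_j a_j * kernel(x, x_j)."""
    terms = []  # list of (sign a_j, sample x_j) pairs

    def d(x):
        return sum(a * kernel(x, xj) for a, xj in terms)

    for _ in range(max_epochs):
        changed = False
        for x, label in zip(samples, labels):
            g = d(x)
            if label == 1 and g <= 0:       # omega_1 sample misclassified
                terms.append((+1, x)); changed = True
            elif label == 2 and g >= 0:     # omega_2 sample misclassified
                terms.append((-1, x)); changed = True
        if not changed:  # a full pass with no correction: converged
            break
    return terms
```

The `max_epochs` cap is a practical safeguard; for linearly separable potentials the loop normally exits after a few passes.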

4. Two ways of forming potential functions:

Potential function of the first kind and potential function of the second kind

Potential function of the first kind:

It can be expanded as a finite sum of symmetric terms, namely:

$$K(x, x_k) = \sum_{i=1}^{m} \phi_i(x)\,\phi_i(x_k)$$

where $\{\phi_i(x)\}$ is a set of orthogonal functions on the pattern domain. Substituting this kind of potential function into the discriminant function gives:

$$d_{k+1}(x) = d_k(x) + r_{k+1}\sum_{i=1}^{m} \phi_i(x_{k+1})\,\phi_i(x) = d_k(x) + \sum_{i=1}^{m} r_{k+1}\,\phi_i(x_{k+1})\,\phi_i(x)$$

Iterative relationship:

$$d_{k+1}(x) = \sum_{i=1}^{m} C_i(k+1)\,\phi_i(x)$$

where

$$C_i(k+1) = C_i(k) + r_{k+1}\,\phi_i(x_{k+1})$$

Therefore, the accumulated potential can be written as:

$$K_{k+1}(x) = \sum_{i=1}^{m} C_i(k+1)\,\phi_i(x)$$

The coefficients $C_i$ are obtained from the iterative formula above.
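With the first kind of potential function, only the $m$ coefficients $C_i$ need to be stored and updated. A Python sketch (the function names and the choice of basis in the test are our own illustration, not prescribed by the text):

```python
def update_coefficients(C, r, phi_values):
    """C_i(k+1) = C_i(k) + r_{k+1} * phi_i(x_{k+1}) for every basis term.

    C: current coefficients [C_1, ..., C_m].
    r: correction coefficient r_{k+1} (0, 1, or -1).
    phi_values: [phi_1(x_{k+1}), ..., phi_m(x_{k+1})].
    """
    return [c + r * p for c, p in zip(C, phi_values)]

def discriminant(C, phi_values):
    """d(x) = sum_i C_i * phi_i(x), evaluated from basis values at x."""
    return sum(c * p for c, p in zip(C, phi_values))
```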

Potential function of the second kind:

Select a symmetric function of the two variables $x$ and $x_k$ as the potential function, i.e. $K(x, x_k) = K(x_k, x)$, which can be expanded into an infinite series. For example:

(a) $K(x, x_k) = e^{-\alpha \|x - x_k\|^2}$

(b) $K(x, x_k) = \dfrac{1}{1 + \alpha \|x - x_k\|^2}$, where $\alpha$ is a positive constant

(c) $K(x, x_k) = \left| \dfrac{\sin \alpha \|x - x_k\|^2}{\alpha \|x - x_k\|^2} \right|$
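The three examples can be written directly as functions. A Python sketch (for (c) the removable singularity at $x = x_k$ is taken as its limit value 1, which is our own convention):

```python
import math

def sq_dist(x, xk):
    """Squared Euclidean distance ||x - xk||^2."""
    return sum((a - b) ** 2 for a, b in zip(x, xk))

def K_a(x, xk, alpha=1.0):
    """(a) Gaussian: exp(-alpha * ||x - xk||^2)."""
    return math.exp(-alpha * sq_dist(x, xk))

def K_b(x, xk, alpha=1.0):
    """(b) Inverse quadratic: 1 / (1 + alpha * ||x - xk||^2)."""
    return 1.0 / (1.0 + alpha * sq_dist(x, xk))

def K_c(x, xk, alpha=1.0):
    """(c) |sin(alpha * ||x - xk||^2) / (alpha * ||x - xk||^2)|."""
    t = alpha * sq_dist(x, xk)
    return 1.0 if t == 0 else abs(math.sin(t) / t)
```

All three are symmetric in their two arguments and peak at $x = x_k$, which is exactly the "hilltop" behaviour described in Section 2.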

4, Experimental content

It is assumed that the normal (class $\omega_1$) and abnormal (class $\omega_2$) data obtained from the examination of three main indexes of patients are as follows:

Class $\omega_1$: (1, 2, 5), (1, 1, 2), (3, 3, 6);

Class $\omega_2$: (5, 6, 10), (7, 6, 11), (8, 7, 12).

5, Experimental steps

1. Select the potential function (choose one of the three bivariate symmetric basis functions, or implement several and allow manual or automatic selection);

2. Choose appropriate data structures to correctly represent the potential function and the discriminant function;

3. Train on the training samples and establish a discriminant function that meets the classification requirements;

4. Record and output the number of training rounds;

5. Use your classifier to judge the categories of all samples (classification decision), and compare the differences with the actual categories;

6. Judge the samples to be classified to obtain their category (prediction), and explain the result with the geometric distribution if possible;

7. Output the expression of your discriminant function (Note: the expression should be easy to read and understand).

6, Testing

1. First test correctness on the existing (training) samples.

2. Classify the data to be classified; here, the samples are (2, 3, 5) and (6, 7, 10).

Test each of them and check whether its geometric distribution is consistent with the result of belonging to class $\omega_1$ or class $\omega_2$, so as to confirm that the designed classifier is correct.
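Putting the pieces together, the experiment can be reproduced in a few lines. This is a Python sketch (not the required MATLAB program of Section 8), using potential function (a) with $\alpha = 1$ and the data of Section 4:

```python
import math

def kernel(x, xk):
    # Potential function (a): exp(-||x - xk||^2)
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, xk)))

# Training data from Section 4: first three samples class 1, last three class 2
samples = [(1, 2, 5), (1, 1, 2), (3, 3, 6), (5, 6, 10), (7, 6, 11), (8, 7, 12)]
labels = [1, 1, 1, 2, 2, 2]

terms = []  # error-correcting terms (a_j, x_j); d(x) = sum_j a_j * K(x, x_j)

def d(x):
    return sum(a * kernel(x, xj) for a, xj in terms)

changed = True
while changed:  # repeat passes until a full pass makes no correction
    changed = False
    for x, label in zip(samples, labels):
        g = d(x)
        if label == 1 and g <= 0:
            terms.append((+1, x)); changed = True
        elif label == 2 and g >= 0:
            terms.append((-1, x)); changed = True

def classify(x):
    return 1 if d(x) > 0 else 2

print(classify((2, 3, 5)))   # prints 1
print(classify((6, 7, 10)))  # prints 2
```

On this data the loop converges after two passes with just two terms, $+K(x, (1,2,5))$ and $-K(x, (5,6,10))$, and the two test samples fall on the expected sides of the boundary, consistent with their geometric distribution.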

7, Implementation tips

1) The samples are stored in a matrix s, one sample per row. To simplify programming, the category number can be appended to each sample as its last dimension;

2) To store and evaluate the discriminant function, an auxiliary structure array ftbl can be used. Each element of the array has two fields, index and symbol: index records the row number of the corresponding sample, and symbol records the sign of the term.

8, MATLAB code

```matlab
% Design of a nonlinear discriminator by the potential function method
% Potential function (second kind, type (a)): K(x, xk) = exp(-||x - xk||^2)

n = 6;    % total number of samples: the first three belong to class 1, the last three to class 2
m = 30;   % maximum number of terms of the discriminant function (not enforced below)
d = 3;    % dimension of each sample
r = 0;    % current number of terms in the discriminant function
tag = 1;  % flag: 1 means another training pass is needed
s = [1, 2,  5, 1;
     1, 1,  2, 1;
     3, 3,  6, 1;
     5, 6, 10, 2;
     7, 6, 11, 2;
     8, 7, 12, 2];   % column 4 is the class label: 1 or 2
run = 0;  % training round counter

while tag == 1
    run = run + 1;
    tag = 0;
    for k = 1:n
        if r == 0
            % The discriminant function has no terms yet: include the first sample
            r = r + 1;
            ftbl(r).symbol = 1;  % sign of the term: 1 positive, -1 negative
            ftbl(r).index  = 1;  % row number of the corresponding sample
            continue;
        end
        % Substitute sample k into the discriminant function built so far
        g = calfun(s, s(k, 1:d), ftbl, r, d);
        if (g > 0 && s(k, 4) == 1) || (g < 0 && s(k, 4) == 2)
            continue;  % correctly classified: the discriminant function is unchanged
        end
        % Misclassified: the current sample forms a new term
        tag = 1;
        r = r + 1;
        ftbl(r).index = k;
        if s(k, 4) == 1
            ftbl(r).symbol = 1;   % terms for class 1 samples are added
        else
            ftbl(r).symbol = -1;  % terms for class 2 samples are subtracted
        end
    end
end

fprintf('Number of training rounds = %d\n', run);

% Output the discriminant function term by term via the structure array ftbl
fprintf('Expression of the discriminant function:\n');
vars = {'x1', 'x2', 'x3'};
for i = 1:r
    if ftbl(i).symbol == 1
        if i == 1
            fprintf('exp{-[');
        else
            fprintf('+exp{-[');
        end
    else
        fprintf('-exp{-[');
    end
    for j = 1:d
        c = s(ftbl(i).index, j);
        if c > 0
            fprintf('(%s-%d)^2', vars{j}, c);
        elseif c < 0
            fprintf('(%s+%d)^2', vars{j}, -c);  % minus a negative is a plus
        else
            fprintf('(%s)^2', vars{j});
        end
        if j < d
            fprintf('+');
        end
    end
    fprintf(']}');
end
fprintf('\n');

% Identify the category of each training sample
fprintf('Category of each training sample:\n');
for k = 1:n
    g = calfun(s, s(k, 1:d), ftbl, r, d);
    if g > 0
        fprintf('Sample %d belongs to class 1\n', k);
    elseif g < 0
        fprintf('Sample %d belongs to class 2\n', k);
    else
        fprintf('Sample %d cannot be classified; its actual class is %d\n', k, s(k, 4));
    end
end

% Classify the samples to be predicted: a = (2, 3, 5) and b = (6, 7, 10)
a = [2, 3, 5];
g = calfun(s, a, ftbl, r, d);
if g > 0
    fprintf('Sample a = (2, 3, 5) belongs to class 1\n');
elseif g < 0
    fprintf('Sample a = (2, 3, 5) belongs to class 2\n');
else
    fprintf('The class of sample a = (2, 3, 5) cannot be determined\n');
end

b = [6, 7, 10];
g = calfun(s, b, ftbl, r, d);
if g > 0
    fprintf('Sample b = (6, 7, 10) belongs to class 1\n');
elseif g < 0
    fprintf('Sample b = (6, 7, 10) belongs to class 2\n');
else
    fprintf('The class of sample b = (6, 7, 10) cannot be determined\n');
end

% Evaluate the accumulated potential (discriminant function) at point x.
% s: sample matrix; ftbl: term table (fields index and symbol);
% r: number of terms; d: dimension.
% (Local functions in scripts require MATLAB R2016b or later.)
function g = calfun(s, x, ftbl, r, d)
g = 0;
for i = 1:r
    temp = 0;
    for j = 1:d
        temp = temp + (x(j) - s(ftbl(i).index, j))^2;
    end
    g = g + ftbl(i).symbol * exp(-temp);  % sum of r exponential terms
end
end
```