Mathematical Modeling: Urban Smart Growth Model Evaluation System
Problem
- Select two specific cities and evaluate the effectiveness of their current growth plans.
- Propose feasible plans to help the selected cities grow smartly.
- Evaluate the success of the proposed plans.
Solution approach
- On the basis of a large amount of urban smart-growth data, 25 indicators are selected through principal component analysis. A new three-level index system is then built on them, and the weight vectors are obtained by the entropy weight and group decision-making methods.
- Establish a measurement system.
- Predict the changes of the city; a combined prediction model is used to minimize the prediction error.
- Two different population growth models (PGM) are used to predict population changes.
Main algorithms:
- Fuzzy evaluation: quantify the qualitative indicators.
- Entropy weight method (EWM): obtain the weight vector of the indicators.
- Group decision-making method (GDM): obtain the weight vector of the second-level indicators.
- K-means clustering algorithm: obtain the success criteria of the smart growth model.
- Support vector machine (SVM) and weighted moving average method (WMAM): predict urban change.
- Population growth model (PGM): predict population changes.
Fuzzy evaluation
1. Related concepts of fuzzy evaluation
- Factor set (the set of evaluation indicators): $U$
- Comment set (the set of evaluation results): $V$
- Weight set (the weights of the indicators): $A$
2. Method steps (first-level fuzzy evaluation problem)
- Determine the factor set, the comment set and the weight set. In this paper, the weight set $A$ is obtained by the analytic hierarchy process (AHP).
- Determine the fuzzy judgment (synthesis) matrix $R$.
- Carry out the comprehensive evaluation:
$$P = A \circ R = (p_1, p_2, p_3, \ldots, p_n)$$
In this paper, let $F$ denote the score set; the final score is then $Z = P \cdot F$.
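Below is a minimal MATLAB sketch of this evaluation step. The numbers are purely illustrative, and the composition $A \circ R$ is taken as the weighted-average operator $M(\cdot,+)$, i.e. ordinary matrix multiplication, which is one common choice and is assumed here rather than taken from the paper.

```matlab
% Fuzzy comprehensive evaluation sketch (illustrative numbers only)
A = [0.3 0.2 0.5];          % weight set, e.g. obtained by AHP
R = [0.2 0.5 0.3;           % fuzzy judgment matrix: row i holds the membership
     0.4 0.4 0.2;           % degrees of indicator i in each comment grade
     0.1 0.3 0.6];
F = [90; 70; 50];           % score assigned to each comment grade

P = A * R;                  % composition A∘R as the M(·,+) operator
P = P / sum(P);             % normalize the membership vector
Z = P * F                   % final score Z = P·F
```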
Entropy weight method (EWM)
1. Forward processing: check whether the input matrix contains negative numbers; if so, re-standardize the data to a non-negative interval. In this paper the data are normalized.
- Standardization formula: min-max normalization, $x'_{ij} = \frac{x_{ij} - \min_i x_{ij}}{\max_i x_{ij} - \min_i x_{ij}}$ for a positive indicator (for a negative indicator the numerator is $\max_i x_{ij} - x_{ij}$); the code below maps the result into $[0.002, 0.996]$ so that $\ln 0$ never occurs.
2. Calculate the probability matrix from the standardized matrix: $p_{ij} = \frac{x'_{ij}}{\sum_{i=1}^{n} x'_{ij}}$.
The entropy of each indicator is obtained with the formula $e_j = -\frac{1}{\ln n}\sum_{i=1}^{n} p_{ij}\ln p_{ij}$.
3. Calculate the correlation coefficient value ($S$ denotes the standard index entropy obtained by clustering a large amount of data).
4. Calculate the entropy weight: $w_j = \frac{1 - e_j}{\sum_{j=1}^{m}(1 - e_j)}$.
MATLAB
```matlab
function [s,w]=shang(x,ind)
% Entropy weight method: computes the weight of each indicator (column)
% and the score of each data row.
% x   : original data matrix; each row is a sample, each column an indicator
% ind : indicator-type vector; 1 = positive indicator, 2 = negative indicator
% s   : score of each sample (row);  w : weight of each column
[n,m]=size(x);                             % n samples, m indicators

%% Normalize the data (guiyi is a custom min-max normalization helper)
for i=1:m
    if ind(i)==1                           % positive indicator
        X(:,i)=guiyi(x(:,i),1,0.002,0.996);% avoid exact 0, since log(0) is undefined
    else                                   % negative indicator
        X(:,i)=guiyi(x(:,i),2,0.002,0.996);
    end
end

%% Proportion p(i,j) of sample i under indicator j
for i=1:n
    for j=1:m
        p(i,j)=X(i,j)/sum(X(:,j));
    end
end

%% Entropy e(j) of indicator j
k=1/log(n);
for j=1:m
    e(j)=-k*sum(p(:,j).*log(p(:,j)));
end
d=ones(1,m)-e;                             % information-entropy redundancy
w=d./sum(d);                               % weight of each indicator
s=100*w*X';                                % comprehensive score of each sample
```
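A minimal usage sketch with made-up data (it assumes the custom normalization helper `guiyi` used above is on the MATLAB path):

```matlab
% Hypothetical example: 4 samples, 3 indicators (columns 1 and 3 positive, column 2 negative)
x   = [3.2 0.8 12;
       2.1 1.5  9;
       4.0 0.6 15;
       2.8 1.1 11];
ind = [1 2 1];
[s,w] = shang(x,ind);   % s: score of each sample, w: weight of each indicator
```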
K-means clustering algorithm
In this paper, the data set is divided into three clusters (k = 3).
The quantity of interest is the sum of squared distances between each sample and the centroid of its cluster.
The goal of the K-means algorithm is to minimize this quantity (which is also related to the purpose of the earlier PCA dimensionality reduction).
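Written out, the within-cluster sum of squared errors that K-means minimizes is

$$E = \sum_{j=1}^{k}\sum_{x \in C_j}\lVert x - \mu_j\rVert_2^2$$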
Input sample
- First input the sample set $D=\{x_1, x_2, \ldots, x_m\}$, the number of clusters $k$, and the maximum number of iterations $N$.
- Randomly select $k$ samples from data set $D$ as the initial $k$ mean vectors $\{\mu_1, \mu_2, \ldots, \mu_k\}$.

Iterate up to $N$ times:

- Initialize the cluster partition $C$ to $C_t = \emptyset$, $t = 1, 2, \ldots, k$ ($k$ clusters).
- For $i = 1, 2, \ldots, m$ (the number of samples), compute the distance between sample $x_i$ and each mean vector $\mu_j$ ($j = 1, 2, \ldots, k$): $d_{ij} = \lVert x_i - \mu_j \rVert_2^2$.
- Assign $x_i$ to the category $\lambda_i$ with the smallest $d_{ij}$, and update $C_{\lambda_i} = C_{\lambda_i} \cup \{x_i\}$.
- For $j = 1, 2, \ldots, k$, recompute the centroid of all sample points in $C_j$: $\mu_j = \frac{1}{|C_j|}\sum_{x \in C_j} x$.
- If none of the $k$ mean vectors changes, stop.

On termination, output the cluster partition $C = \{C_1, C_2, \ldots, C_k\}$.
Code
```matlab
%%% adapted from CSDN
clc; clear;
time=0;
k= ;                               %%% number of clusters; set to 3 in the paper
x= ;                               %%% sample matrix (one row per point, 2 columns)
z=x(1:k,1:2);                      % initial cluster centers
z1=zeros(k,2);
while time<=1000
    count=zeros(k,1);              % number of points assigned to each center
    allsum=zeros(k,2);             % coordinate sums of the points of each center
    num=[];                        % indices of the points belonging to each center
    temp=[];                       % distance from a point to every center
    for i=1:size(x,1)
        for j=1:k
            temp(j,1)=sqrt((z(j,1)-x(i,1)).^2+(z(j,2)-x(i,2)).^2); % distance from point i to center j
            temp(j,2)=j;
        end
        temp=sortrows(temp,1);     % sort by distance in ascending order
        c=temp(1,2);               % index of the nearest cluster center
        count(c)=count(c)+1;       % one more point belongs to this center
        num(c,count(c))=i;
        allsum(c,1)=allsum(c,1)+x(i,1);   % sum of abscissas of the points of this center
        allsum(c,2)=allsum(c,2)+x(i,2);   % sum of ordinates of the points of this center
    end
    z1(:,1)=allsum(:,1)./count(:);        % new abscissa = coordinate sum / point count
    z1(:,2)=allsum(:,2)./count(:);
    if (z==z1)                            % centers no longer change
        break;
    else
        z=z1;
    end
    time=time+1;
end
plot(x(:,1),x(:,2),'r*'); hold on;
plot(z1(:,1),z1(:,2),'bo');
num(num==0)=NaN;
for i=1:k
    if (count(i)==0)                      % guard against centers with no assigned points
        continue;
    else
        disp(['Cluster ',num2str(i),' contains points: ',num2str(num(i,:))]);
    end
end
```
Weighted moving average method (WMAM)
A weighted moving average multiplies the individual data points by different weights when computing the average. In technical analysis, the latest value of an n-day WMA is multiplied by n, the next most recent value by n-1, and so on down to 1.
In this paper, $\lambda_i$ is the weight of the statistical datum $x_{t-i}$.
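For reference, a common general form of the weighted moving average consistent with this description (an assumed form, not necessarily the exact formula used in the paper) is

$$\hat{x}_t = \sum_{i=1}^{n} \lambda_i\, x_{t-i}, \qquad \sum_{i=1}^{n} \lambda_i = 1$$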
The MATLAB code is as follows:
```matlab
y = ;                              %%%% data, as a row vector
w = ;                              %%% weights, as a column vector
m = length(y);
n = ;                              %%% window length, determined by w
for i = 1:m-n+1
    yhat(i) = y(i:i+n-1)*w;        % row vector times column vector = weighted average
end
yhat
err   = abs(y(n+1:m)-yhat(1:end-1))./y(n+1:m)   %%% relative errors
T_err = 1-sum(yhat(1:end-1))/sum(y(n+1:m))      %%% overall relative error ratio
y     = yhat(end)/(1-T_err)                     % corrected forecast for the next period
```
Population growth model (PGM)
The model of population growth in Ningguo City is
$$\frac{1}{p(t)} = \frac{1}{P_0} - b\ln t$$
The parameter $b$ is fitted from the population data of recent years.
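A sketch of how $b$ could be estimated by least squares on the transformed model $\frac{1}{P_0} - \frac{1}{p(t)} = b\ln t$. The population figures and the choice of $P_0$ as the base-year population are placeholders and assumptions, not the paper's actual data or fitting procedure.

```matlab
% Estimate b from recent population data (hypothetical figures)
p  = [382000 383500 384800 386000 387100];   % population in years t = 1..5
t  = 1:length(p);
P0 = p(1);                                   % assume P0 is the base-year population

lhs = 1/P0 - 1./p;                           % 1/P0 - 1/p(t) = b*ln(t)
b   = (log(t)') \ (lhs');                    % least-squares slope through the origin

t_future = 10;                               % forecast for a future year
p_future = 1/(1/P0 - b*log(t_future))
```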