Practical exercise on basic neural networks - building a simple neural network


Reference books

95 - Neural Networks and Deep Learning - Qiu Xipeng

98 - Dive into Deep Learning (hands-on deep learning) - the latest September edition

Chapters 10 (Introduction to Artificial Neural Networks), 11 (Training Deep Neural Networks), and 12 (Distributed TensorFlow)

Perceptron: the inputs are multiplied by their weights and summed, and a step function is then applied to the sum.
Training a perceptron means training its weights.
The figure above shows the simplest case: a single LTU acting as a simple linear binary classifier.
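A minimal numpy sketch of a single LTU, with made-up weights and inputs (not from the original notes), just to make the weighted-sum-plus-step computation concrete:

import numpy as np

def step(z):
    # Heaviside step function: 1 where z >= 0, else 0
    return (z >= 0).astype(int)

def ltu_predict(X, w, b):
    # weighted sum of the inputs plus bias, passed through the step function
    return step(X @ w + b)

X = np.array([[2.0, 0.5], [5.1, 1.8]])   # two samples, two features (made up)
w = np.array([-1.0, 2.0])                # weights: what training adjusts
b = 0.5                                  # bias term (see Q1 below)
print(ltu_predict(X, w, b))              # -> [0 0] for these made-up numbers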

Q1: what are bias neurons?
Q2: in an LTU, when two neurons produce the same output, the weight of the connection between them is increased - doesn't that also strengthen connections that lead to wrong outputs?

Q4: what does it mean that the decision boundary of each output neuron is linear?

Q10: why iris.data[:,(2,3)]?
Q11: where is xx.astype(np.int) needed?

X is selected with a tuple-like index (2, 3), loading the petal length and petal width columns.

  Q3: what is this formula?
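The formula itself did not survive extraction. Given the surrounding context (training a perceptron's weights), it is presumably the perceptron learning rule; this is a reconstruction under that assumption, not taken from the original:

$w_{i,j}^{(\text{next step})} = w_{i,j} + \eta\,(y_j - \hat{y}_j)\,x_i$

where $w_{i,j}$ is the weight between input $i$ and output neuron $j$, $x_i$ the input value, $\hat{y}_j$ the prediction, $y_j$ the target, and $\eta$ the learning rate. The update is driven by the error $(y_j - \hat{y}_j)$, which also speaks to Q2: connections that lead to wrong outputs get corrected, not strengthened.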

https://mp.weixin.qq.com/s/My-G5-tw4iOU8jwaOsBPUA
Adjusting the hyperparameters of the perceptron

Q5: what is this code doing?
Q6: what is SGDClassifier? What is the penalty parameter?
Q7: the perceptron does not output class probabilities, but predicts based on a hard threshold?
Q8: the XOR (exclusive-or) classification problem - a classification problem a linear classifier cannot solve
Q9: an MLP solves the XOR problem by stacking multiple perceptron LTUs
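A partial answer to Q5-Q7 (assuming the linked article covers the Perceptron/SGDClassifier relationship): sklearn's Perceptron is equivalent to an SGDClassifier configured with the perceptron loss, a constant learning rate, and no regularization; penalty is SGDClassifier's regularization parameter.

from sklearn.linear_model import SGDClassifier

# Perceptron() is equivalent to this SGDClassifier configuration:
sgd_clf = SGDClassifier(loss="perceptron",        # hard-threshold predictions, no class probabilities (Q7)
                        learning_rate="constant",
                        eta0=1,                   # constant learning rate of 1
                        penalty=None)             # no regularization penalty (Q6)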

Reserve a time slot: the comparisons, connections and differences between several classifiers, and ways to verify their strengths and weaknesses.

A method for training a multi-layer LTU network (MLP): the backpropagation training algorithm.

Training an MLP with TensorFlow's high-level API

[set feature columns]
[load the feature columns into DNNClassifier]
[adjust with fit] - see the sketch below
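A minimal sketch of the three bracketed steps, using the TF 1.x tf.contrib.learn estimator API; it assumes x_train and y_train (e.g. MNIST features and integer labels) are already loaded:

import tensorflow as tf

# [set feature columns] - infer real-valued feature columns from the training data
feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(x_train)

# [load the feature columns into DNNClassifier] - two hidden layers, 10 output classes
dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[300, 100], n_classes=10,
                                         feature_columns=feature_columns)

# [adjust with fit]
dnn_clf.fit(x=x_train, y=y_train, batch_size=50, steps=40000)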

Q1: what does the DNN do?
Q2: exercise: run this code on the MNIST dataset and use sklearn's StandardScaler

[import packages]
[numpy for data handling, Perceptron for linear fitting(?), the iris loader - download the iris data]

[download the data and adjust its format/type]
[define how to get the corresponding X and y from iris]

[[the usual fit/predict routine]]
[use Perceptron, fit, predict] - a sketch follows below
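A minimal sketch of the bracketed routine, using sklearn's standard iris loader (this also answers Q10 and Q11 above):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

# [download the iris data]
iris = load_iris()
# [get X and y from iris]
X = iris.data[:, (2, 3)]            # columns 2 and 3 = petal length and petal width (Q10)
y = (iris.target == 0).astype(int)  # cast the boolean mask to 0/1 integer labels (Q11)

# [the fit/predict routine]
per_clf = Perceptron()
per_clf.fit(X, y)
print(per_clf.predict([[2.0, 0.5]]))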

Distinguish this from the above: whether to use the high-level or the low-level API depends on whether you want to make lower-level changes to the neural network's architecture with direct control, or just want convenience.

Deprecated code that no longer runs only needs to be understood: knowing the internal logic is enough, there is no need to spend time memorizing it.

Turning point: the earlier code only needs to be understood; the later code does not need to be memorized either - just memorize the last section directly.

Question log from reading the code:

Q3: truncated_normal
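Q3 presumably refers to tf.truncated_normal, often used in low-level TF 1.x code to initialize weight matrices; a minimal sketch (the shape and stddev values here are made up):

import tensorflow as tf

# Draws from a normal distribution but re-draws any value more than 2 standard
# deviations from the mean, so no extreme initial weights slow down training
init = tf.truncated_normal((784, 300), stddev=0.1)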

Q2: with xxx: ?
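Q2 is presumably about Python's `with` statement (a context manager), which appears throughout the TF code further down; a sketch:

import tensorflow as tf

# `with` runs the block under a context manager and cleans up automatically on exit
with tf.name_scope('dnn'):         # ops created here get the 'dnn/' name prefix
    a = tf.constant(1.0, name='a')

with tf.Session() as sess:         # the session is closed automatically when the block ends
    print(sess.run(a))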

  Q1: how to use this StandardScaler?
https://www.cnblogs.com/lvdongjie/p/11349701.html
This link introduces several preprocessing operations quite clearly.

Python preprocessing: sklearn.preprocessing
Normalization - MinMaxScaler
Standardization - StandardScaler
Per-sample normalization (scaling samples to unit norm) - Normalizer
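As a quick answer to Q1 above, a minimal sketch using the StandardScaler class itself (the code further down uses the preprocessing.scale shortcut instead):

from sklearn.preprocessing import StandardScaler
import numpy as np

x = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
scaler = StandardScaler()
x_scaled = scaler.fit_transform(x)   # fit on the data, then transform it
print(x_scaled)                      # each column now has mean 0 and std 1
print(scaler.mean_, scaler.scale_)   # the per-column mean and scale that were learned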

00 - import libraries
0 - tensorflow, mnist, accuracy_score, numpy: data loading, the tf framework, and accuracy measurement

0 - set the number of neurons in each layer of the neural network: input layer, hidden layers, output layer

0 - split the training set and test set

0 - set up the network layers

0 - set the loss function and learning rate

0 - set up the optimizer

0 - compute accuracy

## Import libraries: tensorflow, accuracy metric, array/matrix library
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
from sklearn.metrics import accuracy_score
import numpy as np

if __name__ == '__main__':
    # Number of neurons in each layer
    n_inputs = 28*28
    n_hidden1 = 300
    n_hidden2 = 100
    n_outputs = 10

    # Fetch the MNIST input dataset
    mnist = input_data.read_data_sets("/tmp/data/")

    # Split into training and test sets
    x_train = mnist.train.images
    x_test = mnist.test.images
    y_train = mnist.train.labels.astype("int")
    y_test = mnist.test.labels.astype("int")

    x = tf.placeholder(tf.float32, shape=(None, n_inputs), name='x')
    y = tf.placeholder(tf.int64, shape=(None), name='y')

    # Network layers
    with tf.name_scope('dnn'):
        hidden1 = tf.layers.dense(x, n_hidden1, activation=tf.nn.relu, name='hidden1')
        hidden2 = tf.layers.dense(hidden1, n_hidden2, name='hidden2', activation=tf.nn.relu)
        logits = tf.layers.dense(hidden2, n_outputs, name='outputs')

    # Loss function
    with tf.name_scope('loss'):
        xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
        loss = tf.reduce_mean(xentropy, name='loss')

    learning_rate = 0.01

    # Train
    with tf.name_scope('train'):
        optimizer = tf.train.GradientDescentOptimizer(learning_rate)
        training_op = optimizer.minimize(loss)

    # Eval?
    with tf.name_scope('eval'):
        correct = tf.nn.in_top_k(logits, y, 1)  # whether the prediction matches the true label; returns booleans
        accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

    init = tf.global_variables_initializer()

    n_epochs = 20
    batch_size = 50

    with tf.Session() as sess:
        init.run()
        for epoch in range(n_epochs):
            for iteration in range(mnist.train.num_examples // batch_size):
                x_batch, y_batch = mnist.train.next_batch(batch_size)
                # the original left the feed_dict= blank; the obvious placeholder/batch
                # pairings are assumed below
                sess.run(training_op, feed_dict={x: x_batch, y: y_batch})
            acc_train = accuracy.eval(feed_dict={x: x_batch, y: y_batch})
            acc_test = accuracy.eval(feed_dict={x: x_test, y: y_test})
            print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)

A SyntaxError: invalid syntax came up.
The displayed object has no such attribute; there may be a mistake in the import lines (import ... as ... vs from ... import ...), since versions and syntax differ.

I don't quite understand what goes inside np.array([]).

Common array creation methods, compared with list().

A one-dimensional array won't work; try it again with a two-dimensional array.
There's something wrong with the two-dimensional array.

Square brackets nested outside square brackets - see the sketch below.
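A quick sketch of the bracket nesting, compared with plain lists:

import numpy as np

a1 = np.array([1, 2, 3])            # 1-D: one pair of square brackets, like list()
a2 = np.array([[1, 2], [3, 4]])     # 2-D: square brackets outside square brackets
print(a1.shape)                     # (3,)
print(a2.shape)                     # (2, 2)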

IndentationError: unindent does not match any outer indentation level

https://www.sogou.com/link?url=hedJjaC291MPna5SxlQUxvo1ussxymppzvrb88k-uwZPQbKAG378IQ..

It is probably an issue of whitespace characters and system-specific symbols (e.g. mixed tabs and spaces).

There seem to be extra spaces in this spot.

Many of the reference code snippets no longer work as-is; the versions are old.

Problems caused by version differences can also be solved by migrating the code files to the newer version.

https://www.cnblogs.com/lvdongjie/p/11349701.html

Normalization - MinMaxScaler

# Data preprocessing: normalization
# Create the data to be preprocessed
# preprocessing.MinMaxScaler() builds the scaler, then fit_transform() is called
# Print the transformed values
import numpy as np
from sklearn import preprocessing

XXX = np.array([[1, 5], [2, 3]])
min_max_scaler = preprocessing.MinMaxScaler()
XXX_minmax = min_max_scaler.fit_transform(XXX)
print(XXX_minmax)

Standardization - StandardScaler

# Import packages: numpy and preprocessing
from sklearn import preprocessing
import numpy as np

# Dummy array to be processed
x = np.array([[1, 2], [2, 3]])

# Standardize
x_scaled = preprocessing.scale(x)

# Print the array along with its column means and standard deviations
print(x_scaled)
print(x_scaled.mean(axis=0))  # .mean is a method: call it, rather than printing the method object
print(x_scaled.std(axis=0))

Per-sample normalization - Normalizer

from sklearn import preprocessing
import numpy as np

x = np.array([[1, 2], [3, 4]])
x_normalized = preprocessing.normalize(x, norm='l2')  # 'l2' with the letter l, not '12'
print(x_normalized)

These three preprocessing methods are explained well in: Python data preprocessing (sklearn.preprocessing) - MinMaxScaler, StandardScaler, Normalizer, normalize - avatarx - blog
