TensorBoard visualization in TensorFlow 2.0

This article introduces TensorBoard visualization under TensorFlow 2.0 from three angles.

  1. Calling TensorBoard from keras fit (the following code runs in Jupyter notebooks)
    We use the MNIST handwritten digit recognition dataset to compile and train the network.
#Import the modules first
import tensorflow as tf
import datetime #The datetime module provides classes for handling dates and times
#    datetime is introduced here mainly to tell runs apart visually: if we train the network several times,
#    each run gets its own timestamp, so every training run can be compared in the TensorBoard interface
#Data loading and processing
(x_train,y_train),(x_test,y_test)=tf.keras.datasets.mnist.load_data()
x_train=tf.expand_dims(x_train,-1)
x_test =tf.expand_dims(x_test,-1)
x_train=tf.cast(x_train/255,tf.float32)
x_test=tf.cast(x_test/255,tf.float32)
y_train=tf.cast(y_train,tf.int64)
y_test=tf.cast(y_test,tf.int64)
db_train=tf.data.Dataset.from_tensor_slices((x_train,y_train))
db_test=tf.data.Dataset.from_tensor_slices((x_test,y_test))
db_train=db_train.repeat().shuffle(60000).batch(128)
db_test=db_test.repeat().batch(128)
#Create a simple model
model=tf.keras.Sequential([
    tf.keras.layers.Conv2D(64,[3,3],activation='relu',input_shape=[None,None,1]),
    tf.keras.layers.Conv2D(128,[3,3],activation='relu'),
    tf.keras.layers.Conv2D(256,[3,3],activation='relu'),
    tf.keras.layers.GlobalMaxPooling2D(),
    tf.keras.layers.Dense(10,activation='softmax')
])
#Model compilation
model.compile(optimizer='adam',
             loss='sparse_categorical_crossentropy',
             metrics=['accuracy'])

#The following steps run directly in Jupyter notebooks
#Build the log path
import os
log_dir=os.path.join('logs',datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))#A time-stamped directory keeps each run separate in the TensorBoard interface
tensorboard_callback=tf.keras.callbacks.TensorBoard(log_dir,histogram_freq=1)#TensorFlow's built-in TensorBoard callback
model.fit(db_train,
          epochs=10,
          steps_per_epoch=60000//128,
          validation_data=db_test,
          validation_steps=10000//128,
          callbacks=[tensorboard_callback])

Once the training log shows that training has completed, open TensorBoard directly in the Jupyter notebook:

%load_ext tensorboard  #Load the tensorboard notebook extension
%matplotlib inline    #Inline visualization
%tensorboard --logdir logs   #Point TensorBoard at the generated logs directory

After running these commands, the TensorBoard interface opens directly inside the notebook.
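If you are working outside a notebook, the same interface can be started from a terminal with tensorboard --logdir logs and then opened in a browser at http://localhost:6006.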

SCALARS shows scalar curves such as accuracy and loss, while GRAPHS shows the structure graph of the model.

DISTRIBUTIONS shows the distribution of the kernels and biases and how they change over training.

HISTOGRAMS shows the histogram view of the biases and kernels (in the diagram we can see the results of training at different points in time). A sketch of the callback options that feed these tabs follows below.
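For reference, which tabs receive data is controlled by the arguments of the TensorBoard callback used above; here is a minimal sketch (the values shown are illustrative choices, not requirements):

tensorboard_callback=tf.keras.callbacks.TensorBoard(
    log_dir='logs',
    histogram_freq=1,    #compute weight histograms every epoch -> feeds DISTRIBUTIONS and HISTOGRAMS
    write_graph=True,    #log the model graph -> feeds GRAPHS
    update_freq='epoch') #log loss and metrics once per epoch -> feeds SCALARS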

2. TensorBoard visualization of custom variables
Suppose we want to retrain the model and record a custom learning rate. First we create a file writer with tf.summary.create_file_writer() and set it as the default. Then we define a custom learning rate function and pass it to the tf.keras.callbacks.LearningRateScheduler callback. Inside the learning rate function we use tf.summary.scalar() to record the custom learning rate, and finally we pass the LearningRateScheduler callback to model.fit().
Here is the specific code (parts identical to the above are omitted):

import tensorflow as tf
...
import os
log_dir=os.path.join('logs',datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback=tf.keras.callbacks.TensorBoard(log_dir,histogram_freq=1)
#Write files to disk with a file writer
file_writer=tf.summary.create_file_writer(log_dir+'/lr')#Define where the files are stored; here it is the lr folder under log_dir
#Set file_writer as the default file writer
file_writer.set_as_default()#From now on, tf.summary calls write to log_dir+'/lr' by default

#Next, define a learning rate that changes with the epoch
def lr_sche(epoch):
    learning_rate=0.2#Start with a relatively high learning rate, then reduce it gradually as training goes on
    if epoch>5:
        learning_rate=0.02
    if epoch>10:
        learning_rate=0.01
    if epoch>20:
        learning_rate=0.005
    #Record the change in the learning rate
    tf.summary.scalar('learning_rate',data=learning_rate,step=epoch)#Three parameters: the name, the data to record, and the step
    return learning_rate
#Pass the learning rate function to the LearningRateScheduler callback
lr_callback=tf.keras.callbacks.LearningRateScheduler(lr_sche)


model.fit(db_train,
          epochs=25,
          steps_per_epoch=60000//128,
          validation_data=db_test,
          validation_steps=10000//128,
          callbacks=[tensorboard_callback,lr_callback])
#Then call tensorboard to visualize it.
%load_ext tensorboard  #Load the tensorboard notebook extension
%matplotlib inline    #Inline visualization
%tensorboard --logdir logs

Then we can observe our custom learning rate curve in the SCALARS tab.

3. TensorBoard visualization of custom training
Here we want to visualize the accuracy and loss during training and testing.
The basic flow of custom training is the same as above: first data preprocessing, then building the network; but instead of model.compile() and model.fit(), we write the steps ourselves. The code is as follows:

import tensorflow as tf
...
model=tf.keras.Sequential([
    tf.keras.layers.Conv2D(64,[3,3],activation='relu',input_shape=[None,None,1]),
    tf.keras.layers.Conv2D(128,[3,3],activation='relu'),
    tf.keras.layers.Conv2D(256,[3,3],activation='relu'),
    tf.keras.layers.GlobalMaxPooling2D(),
    tf.keras.layers.Dense(10,activation='softmax')
])
#Next, set up the custom training components
optimizer=tf.keras.optimizers.Adam()
loss_func=tf.keras.losses.SparseCategoricalCrossentropy()

def loss(model,x,y):
    y_ =model(x)#The model here is the network model we created
    return loss_func(y,y_)
    
train_loss=tf.keras.metrics.Mean('train_loss')
train_accuracy=tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy')
test_loss=tf.keras.metrics.Mean('test_loss')
test_accuracy=tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')

def train_step(model,images,labels):
    with tf.GradientTape() as t:
        pred=model(images)
        loss_step=loss_func(labels,pred)
    grads=t.gradient(loss_step,model.trainable_variables)
    optimizer.apply_gradients(zip(grads,model.trainable_variables))
    train_loss(loss_step)#Accumulate the batch loss into the metric
    train_accuracy(labels,pred)

def test_step(model,images,labels):
    pred=model(images)
    loss_step=loss_func(labels,pred)
    test_loss(loss_step)
    test_accuracy(labels,pred)
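As an optional refinement (not part of the original flow), both step functions can be compiled into TensorFlow graphs with tf.function, which usually speeds up the loop:

#Optional: compile the step functions into graphs for speed
train_step=tf.function(train_step)
test_step=tf.function(test_step)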

At this point we have effectively reproduced what model.compile() and model.fit() do.
Next, create the file writers with tf.summary.create_file_writer() and set up the log paths:

current_time=datetime.datetime.now().strftime("%Y%m%d-%H%M%S")#Record the current time
#Create separate directories for training and testing
train_log_dir='logs/gradient_tape/'+current_time+'/train'#Training log directory
test_log_dir='logs/gradient_tape/'+current_time+'/test'#Test log directory
train_writer=tf.summary.create_file_writer(train_log_dir)
test_writer=tf.summary.create_file_writer(test_log_dir)

Next, write the training process to disk

def train():
    for epoch in range(10):
        for (batch,(images,labels)) in enumerate(db_train):#db_train is assumed here to be a finite (non-repeated) dataset
            train_step(model,images,labels)
        with train_writer.as_default():#Inside this with block, train_writer is the default writer
            tf.summary.scalar('loss',train_loss.result(),step=epoch)
            tf.summary.scalar('acc',train_accuracy.result(),step=epoch)

        for (batch,(images,labels)) in enumerate(db_test):
            test_step(model,images,labels)
        with test_writer.as_default():
            tf.summary.scalar('loss',test_loss.result(),step=epoch)
            tf.summary.scalar('acc',test_accuracy.result(),step=epoch)

        template='Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
        print(template.format(epoch+1,
                              train_loss.result(),
                              train_accuracy.result()*100,
                              test_loss.result(),
                              test_accuracy.result()*100))
        #Reset the metrics at the end of each epoch
        train_loss.reset_states()
        train_accuracy.reset_states()
        test_loss.reset_states()
        test_accuracy.reset_states()

Lastly, run the train function:

if __name__=='__main__':
    train()

Finally, call TensorBoard to visualize it.
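As in the earlier sections, load the extension and point it at the directories created above:

%load_ext tensorboard  #Load the tensorboard notebook extension
%tensorboard --logdir logs/gradient_tape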
