Errors when using the NASNet model with Keras for deep learning training


When using the NASNet model with Keras for deep learning training, the following code was used.

I. Code snippet:
from keras.layers import Input
from keras.applications.nasnet import NASNetMobile

inputs = Input((224, 224, 3))
base_model = NASNetMobile(include_top=False, input_shape=(224, 224, 3))  # , weights=None
x = base_model(inputs)
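The original post does not show the rest of the training code. As context only, here is a minimal sketch of how the snippet might be completed for training; the pooling/classification head, optimizer, and loss below are assumptions, not part of the original code:

from keras.layers import Input, GlobalAveragePooling2D, Dense
from keras.models import Model
from keras.applications.nasnet import NASNetMobile

inputs = Input((224, 224, 3))
base_model = NASNetMobile(include_top=False, input_shape=(224, 224, 3))
x = base_model(inputs)
x = GlobalAveragePooling2D()(x)               # assumed pooling head
outputs = Dense(10, activation='softmax')(x)  # assumed 10-class head
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(train_x, train_y)  # AveragePoolGrad in the traceback below suggests
#                              # the failure surfaces during this backward pass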

At first, training always failed with the following error:

ValueError: padding must be zero for average_exc_pad
Apply node that caused the error: AveragePoolGrad(Elemwise}.0, IncSubtensor.0, TensorConstant{(2,) of 2}, TensorConstant{(2,) of 2}, TensorConstant{(2,) of 1})
Toposort index: 137
Inputs types: [TensorType(float32, 4D), TensorType(float32, 4D), TensorType(int32, vector), TensorType(int32, vector), TensorType(int32, vector)]
Inputs shapes: [(32, 32, 64, 64), (32, 32, 33, 33), (2,), (2,), (2,)]
Inputs strides: [(524288, 16384, 256, 4), (139392, 4356, 132, 4), (4,), (4,), (4,)]
Inputs values: ['not shown', 'not shown', array([2, 2]), array([2, 2]), array([1, 1])]
Outputs clients: [[InplaceDimShuffle(AveragePoolGrad.0)]]
Backtrace when the node is created (use Theano flag traceback.limit=N to make it longer):
  File "C:\Users\aiza\Anaconda3\envs\py2\lib\site-packages\theano\gradient.py", line 1272, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "C:\Users\aiza\Anaconda3\envs\py2\lib\site-packages\theano\gradient.py", line 967, in access_term_cache
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "C:\Users\aiza\Anaconda3\envs\py2\lib\site-packages\theano\gradient.py", line 967, in
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "C:\Users\aiza\Anaconda3\envs\py2\lib\site-packages\theano\gradient.py", line 1272, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "C:\Users\aiza\Anaconda3\envs\py2\lib\site-packages\theano\gradient.py", line 967, in access_term_cache
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "C:\Users\aiza\Anaconda3\envs\py2\lib\site-packages\theano\gradient.py", line 967, in
    output_grads = [access_grad_cache(var) for var in node.outputs]
  File "C:\Users\aiza\Anaconda3\envs\py2\lib\site-packages\theano\gradient.py", line 1272, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "C:\Users\aiza\Anaconda3\envs\py2\lib\site-packages\theano\gradient.py", line 1108, in access_term_cache
    new_output_grads)
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

II. After changing the code to the following:
base_model = NASNetMobile(include_top=False, weights='imagenet')
x = base_model.output
The previous error disappears, but a new one appears:

The new error indicates that, when training with pre-trained weights, the number of layers in the weight file does not match the number of layers in the model.
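To confirm the mismatch yourself, a minimal diagnostic sketch follows (the local weight-file name is an assumption; point it at whatever file you are actually loading):

import h5py
from keras.applications.nasnet import NASNetMobile

base_model = NASNetMobile(include_top=False, weights=None, input_shape=(224, 224, 3))

# Weight files written by save_weights() keep the layer names in a root attribute;
# files written by model.save() keep them under the 'model_weights' group.
with h5py.File("NASNet-mobile-no-top.h5", "r") as f:  # assumed local file name
    g = f["model_weights"] if "model_weights" in f else f
    file_layers = [n.decode("utf8") for n in g.attrs["layer_names"]]

print(len(file_layers), "layers in the weight file")
print(len(base_model.layers), "layers in the model")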

III. There are two solutions:
1. Don't use pre-trained weights at first; initialize the model with random weights instead:
base_model = NASNetLarge(weights=None, include_top=False)

2. Use the load_weights function to load the pre-trained weights:
base_model.load_weights("your weight path", by_name=True)

This way, the pre-trained weights can still be used (a combined sketch of both options follows below).
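For reference, a minimal sketch that chains the two options together (the local weight-file name below is an assumption, not from the original post):

from keras.applications.nasnet import NASNetMobile

# Option 1: build the model with randomly initialized weights (no mismatch possible).
base_model = NASNetMobile(include_top=False, weights=None, input_shape=(224, 224, 3))

# Option 2: then load pre-trained weights matched by layer name; layers whose
# names are not present in the file are simply skipped.
base_model.load_weights("NASNet-mobile-no-top.h5", by_name=True)  # assumed local weight file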

