Activations of frozen layers are different before/after training, why?

I am following the example "transfer-learning-using-googlenet", where the last 3 layers ('loss3-classifier','prob','output') are replaced with 3 new ones. Then I freeze the first 141 layers (that is, up to and including 'pool5-drop_7x7_s1'):
layers(1:141) = freezeWeights(layers(1:141));
lgraph = createLgraphUsingConnections(layers,connections);
Then I fine-tune the network as in the example.
Since 'pool5-7x7_s1' comes BEFORE 'pool5-drop_7x7_s1', I would expect the following two vectors to be the same:
b_orig= activations(net_orig, I, 'pool5-7x7_s1');
b_tune= activations(net_tune, I, 'pool5-7x7_s1');
but they aren't! Any idea why?
p.s. I also tried the activations of several other layers BEFORE 'pool5-drop_7x7_s1', and I got different vectors. Here 'I' is an image, 'net_orig = googlenet;', and 'net_tune' is the resulting network after fine-tuning.
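To quantify the mismatch, here is the minimal check I am running, reusing the same net_orig, net_tune and I as above:
b_orig = activations(net_orig, I, 'pool5-7x7_s1');
b_tune = activations(net_tune, I, 'pool5-7x7_s1');
max(abs(b_orig(:) - b_tune(:))) % nonzero, even though these layers are frozen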
2 Comments
conngame on 15 Jul 2018
I have the same problem using alexnet. Any explanation for this?
ntinoson on 17 Jul 2018
I also tried other CNNs, with the same result: the activations of frozen layers after fine-tuning are different from those before fine-tuning (for the same input image, of course). If anyone comes up with an explanation, drop a line!


Accepted Answer

Amanjit Dulai on 14 Aug 2018
The vectors are different because when you fine-tune on a new dataset, the average image stored in the "imageInputLayer" is recalculated for your new dataset. The frozen weights themselves do not change, but the input normalization does, so every activation downstream of the input layer changes with it.
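You can verify this by comparing the two networks' input layers and one set of frozen weights directly. A minimal sketch, assuming a release where the stored average image is the Mean property of the imageInputLayer (older releases call it AverageImage):
m_orig = net_orig.Layers(1).Mean; % the input layer is the first layer
m_tune = net_tune.Layers(1).Mean;
max(abs(m_orig(:) - m_tune(:))) % nonzero: the input normalization changed
isequal(net_orig.Layers(2).Weights, net_tune.Layers(2).Weights) % true: frozen weights untouched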
2 Comments
ntinoson on 27 Aug 2018
OK, I see, thanks!
p.s. If 'Normalization' were set to 'none' (i.e. no data transformation applied), I guess the vectors would be the same, right?
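One way to test that is to swap in an input layer that applies no normalization before retraining. A sketch, assuming googlenet's input size [224 224 3] and input layer name 'data' (replaceLayer needs R2018b or newer):
newInput = imageInputLayer([224 224 3], 'Normalization','none', 'Name','data');
lgraph = replaceLayer(lgraph, 'data', newInput);
% retrain with trainNetwork as before; no average image is stored,
% so fine-tuning can no longer change the input preprocessing
Note that the pretrained googlenet itself zero-centers its input, so for the activations to match you would also have to compare against a copy of net_orig with the same 'none' normalization (or feed both networks identically preprocessed images).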

댓글을 달려면 로그인하십시오.

More Answers (0)
