Difficulty utilizing pretrained "pix2Pix" GAN implementation using Deep Network Designer
23 views (last 30 days)
Hi there,
I am attempting to use a pretrained Pix2Pix GAN generator that was trained following the TensorFlow/Keras pix2pix example. After training in TensorFlow, I saved the generator model to the required model.h5 file.
In MATLAB:
1. Import the layers and weights:
lgraph = importKerasLayers(modelfile,'ImportWeights',true,'OutputLayerType','regression')
I have also tried the above without specifying the regression output layer, but the same issue occurs.
2. Using the Deep Network Designer, I import lgraph from the workspace and then export it with the pretrained parameters. Unless I am mistaken, the weights and biases should be the same as those trained in TensorFlow?
3. Execute the live script that Deep Network Designer generated to create the layer graph with the weights, biases, etc.
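For reference, after the import I also check the layer graph for placeholder layers, since importKerasLayers inserts placeholders for any Keras layers it cannot translate, and those carry no weights. A minimal sketch (modelfile is the path to my saved .h5 file):

```matlab
% Import and check for placeholder layers left by unsupported Keras layers
lgraph = importKerasLayers(modelfile, 'ImportWeights', true, ...
    'OutputLayerType', 'regression');
placeholders = findPlaceholderLayers(lgraph);
if ~isempty(placeholders)
    disp(placeholders)  % these must be replaced before assembleNetwork succeeds
end
```

If this list is non-empty, the corresponding layers would need to be replaced (e.g. with replaceLayer) before the network can behave like the trained generator.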
However, once I assemble the generator from the layer graph:
Generator = assembleNetwork(lgraph)
and then pass an input image (correctly sized, normalized, etc.) through the generator:
Y = predict(Generator, dlPic)
the output appears to be an unchanged copy of the input image.
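One thing I have been double-checking is the input/output scaling, since the TensorFlow pix2pix example normalizes images to [-1, 1] and the generator ends in tanh, so its output is also in [-1, 1]. A sketch of what I mean (the file name and the 256x256 input size are just examples from my setup):

```matlab
% Scale the input the same way the TensorFlow pix2pix example does
pic = im2single(imread('input.jpg'));   % values in [0, 1]
pic = imresize(pic, [256 256]);         % generator's expected input size
pic = pic * 2 - 1;                      % rescale to [-1, 1]

Y = predict(Generator, pic);            % output of a tanh generator lies in [-1, 1]
imshow((Y + 1) / 2)                     % rescale back to [0, 1] for display
```

Without the rescaling step, the displayed prediction can look deceptively similar to the input.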
Am I missing any important aspects?
Is it necessary to define the discriminator, loss functions etc if I only wish to utilize the pretrained Generator architecture?
Any assistance would be GREATLY appreciated!
0 Comments
Answers (0)