Large WGAN-GP train loss












This is the loss function of WGAN-GP:



gen_sample = model.generator(input_gen)
disc_real = model.discriminator(real_image, reuse=False)
disc_fake = model.discriminator(gen_sample, reuse=True)
disc_concat = tf.concat([disc_real, disc_fake], axis=0)  # note: not used below

# Gradient penalty
alpha = tf.random_uniform(
    shape=[BATCH_SIZE, 1, 1, 1],
    minval=0.,
    maxval=1.)
differences = gen_sample - real_image
interpolates = real_image + (alpha * differences)
# tf.gradients returns a list with one gradient per tensor in its second
# argument, so [0] extracts the single gradient w.r.t. interpolates.
gradients = tf.gradients(model.discriminator(interpolates, reuse=True), [interpolates])[0]
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))
gradient_penalty = tf.reduce_mean((slopes - 1.) ** 2)

d_loss_real = tf.reduce_mean(disc_real)
d_loss_fake = tf.reduce_mean(disc_fake)

disc_loss = -(d_loss_real - d_loss_fake) + LAMBDA * gradient_penalty
gen_loss = -d_loss_fake
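
For reference, the code above is intended to implement the WGAN-GP objectives of Gulrajani et al. (2017); the formula is added here for context:

    L_D = \mathbb{E}[D(G(z))] - \mathbb{E}[D(x)] + \lambda\,\mathbb{E}\left[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\right],
    \qquad \hat{x} = x + \alpha\,(G(z) - x),\ \alpha \sim U[0, 1],

and the generator minimizes L_G = -\mathbb{E}[D(G(z))].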


This is the training loss:

[training loss plot]



The generator loss is oscillating, and its value is very large.
My question is: is this generator loss normal or abnormal?










Tags: python tensorflow deep-learning






asked Nov 21 '18 at 13:58 by WEN WEN

1 Answer

One thing to note is that your gradient penalty calculation is wrong. The following line:

    slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))

should actually be:

    slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1,2,3]))

You are reducing only over axis 1, but the gradient has the shape of an image, as the alpha shape [BATCH_SIZE, 1, 1, 1] indicates, so the per-sample gradient norm must be reduced over the axes [1, 2, 3].
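
For illustration, here is a minimal self-contained sketch of the corrected penalty in the same TF 1.x graph API the question uses; toy_critic, the shapes, and the placeholders are hypothetical stand-ins for model.discriminator and the real inputs:

    import tensorflow as tf  # TF 1.x, graph mode, as in the question

    BATCH_SIZE, H, W, C = 16, 32, 32, 3  # hypothetical shapes
    LAMBDA = 10.                         # penalty weight from the WGAN-GP paper

    real_image = tf.placeholder(tf.float32, [BATCH_SIZE, H, W, C])
    gen_sample = tf.placeholder(tf.float32, [BATCH_SIZE, H, W, C])

    def toy_critic(x):
        # Stand-in for model.discriminator: one scalar score per image.
        return tf.reduce_sum(x, axis=[1, 2, 3])

    alpha = tf.random_uniform(shape=[BATCH_SIZE, 1, 1, 1], minval=0., maxval=1.)
    interpolates = real_image + alpha * (gen_sample - real_image)
    gradients = tf.gradients(toy_critic(interpolates), [interpolates])[0]
    # Per-sample L2 norm of an image-shaped gradient: reduce over ALL
    # non-batch axes [1, 2, 3], not just axis 1.
    slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), axis=[1, 2, 3]))
    gradient_penalty = tf.reduce_mean((slopes - 1.) ** 2)

With reduction_indices=[1], slopes has shape [BATCH_SIZE, W, C] instead of [BATCH_SIZE], so you penalize partial norms rather than one gradient norm per sample.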



The other problem is the value of your generator loss. It should be:

    gen_loss = d_loss_real - d_loss_fake

For the gradient computation this makes no difference, because the generator's parameters only appear in d_loss_fake. For the reported value of the generator loss, however, it makes all the difference in the world, and it is the reason why your loss oscillates so much.
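
A quick way to convince yourself of the gradient part; this is a hypothetical toy graph, not the question's model:

    import tensorflow as tf  # TF 1.x

    g = tf.Variable(1.0)            # stands in for a generator parameter
    d_loss_fake = 3.0 * g           # depends on the generator
    d_loss_real = tf.constant(5.0)  # does not depend on the generator

    grad_a = tf.gradients(-d_loss_fake, [g])[0]
    grad_b = tf.gradients(d_loss_real - d_loss_fake, [g])[0]

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run([grad_a, grad_b]))  # [-3.0, -3.0]: identical gradients

Only the reported loss value changes, which is why the plotted generator loss looks so different.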



At the end of the day, you should judge the quality of your GAN by the performance metric you actually care about, such as the Inception Score or the Fréchet Inception Distance (FID), because the discriminator and generator losses are only mildly descriptive.






answered Nov 21 '18 at 14:04 by Thomas Pinetz (edited Nov 21 '18 at 14:07)

• Thank you for your advice and answer. You saved my day. After changing the code, all the disc_loss values are negative and all the gen_loss values are positive. Is that reasonable?
  – WEN WEN, Nov 21 '18 at 14:51











• This is the desired behavior. GANs are notoriously fickle and can exhibit all kinds of behaviors even when coded correctly, so it is hard to tell. Are the output images reasonable?
  – Thomas Pinetz, Nov 21 '18 at 15:02











• Yes, the output images look reasonable. Thank you.
  – WEN WEN, Nov 21 '18 at 15:16










