For example, x is fed into network A, which outputs y. Then y is used to train a new network B. After B has been trained for a specified number of iterations, B is evaluated on x to produce a loss c, and c is used as the loss for training network A. In short, I want to define a loss that is itself a trainable network applied to the output of a previous network. Is there any way to define such a loss in TensorFlow? Thanks.
Not sure if this works, but you could try the following: once you have computed the loss c, feed its value back into network A through a placeholder, and assign that placeholder to a non-trainable Variable inside model A. The assign operation then serves as the "training operation", which you can feed into an optimizer and fetch in the optimization step. If both networks are loaded in the same graph, you probably won't even need the placeholder workaround, since gradients can flow directly from c back to A's weights.
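To illustrate the same-graph case, here is a minimal TensorFlow 2 sketch (eager mode, so no placeholders are needed). The two "networks" are reduced to single weight matrices `w_a` and `w_b`, and the shapes, optimizers, and iteration counts are illustrative assumptions, not taken from the question. B is first trained on A's (frozen) output, then B's loss on A(x) is differentiated straight through to A:

```python
import tensorflow as tf

# Hypothetical tiny networks: A and B are each a single layer.
w_a = tf.Variable(tf.random.normal([4, 4]))   # network A's weights
w_b = tf.Variable(tf.random.normal([4, 1]))   # network B's weights

opt_a = tf.keras.optimizers.SGD(0.01)
opt_b = tf.keras.optimizers.SGD(0.01)

x = tf.random.normal([8, 4])        # input to A
target = tf.random.normal([8, 1])   # supervision for B (assumed)

# Step 1: train B for a few iterations on y = A(x), holding A fixed.
y_fixed = tf.stop_gradient(tf.nn.relu(tf.matmul(x, w_a)))
for _ in range(10):
    with tf.GradientTape() as tape:
        loss_b = tf.reduce_mean((tf.matmul(y_fixed, w_b) - target) ** 2)
    opt_b.apply_gradients([(tape.gradient(loss_b, w_b), w_b)])

# Step 2: evaluate B on A(x). Because both networks live in the
# same graph, the scalar loss c differentiates through B's forward
# pass into A's weights -- no placeholder round-trip required.
with tf.GradientTape() as tape:
    y = tf.nn.relu(tf.matmul(x, w_a))
    c = tf.reduce_mean((tf.matmul(y, w_b) - target) ** 2)
opt_a.apply_gradients([(tape.gradient(c, w_a), w_a)])
```

Note that `tf.stop_gradient` in step 1 is what keeps A frozen while B trains; dropping it in step 2 is what lets c act as A's training loss.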