I am using a model very similar to the pix2pix model (https://arxiv.org/pdf/1611.07004.pdf). I don't feed a random noise vector to the "generator", since it works just fine without one, but I do condition on some input parameters. Is it wrong to call it a generative model, and the first network a generator, when I don't use this vector, given that generative models are usually defined by sampling from such a vector? Should it instead be called adversarial training? I use dropout in the generator network, which could be considered a source of stochasticity. Also, in my mind it is still a generator, because it is the network that generates the fake images, as opposed to the discriminator. I would be happy to hear what you think.
It is more of a theoretical question.
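To make the dropout point concrete, here is roughly what I mean: a minimal PyTorch sketch (the tiny architecture is just a placeholder, not my actual network) where dropout is kept active at inference time, as the pix2pix paper does, so the generator produces different outputs for the same conditioning input even without an explicit noise vector.

```python
# Minimal sketch: dropout kept "on" at inference time as the only source
# of stochasticity. The architecture below is a placeholder, not my model.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1),
            nn.ReLU(),
            nn.Dropout2d(p=0.5),          # the source of stochasticity
            nn.Conv2d(16, out_ch, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

gen = TinyGenerator()
gen.eval()                                # eval mode for everything else...
for m in gen.modules():                   # ...but keep dropout layers active
    if isinstance(m, (nn.Dropout, nn.Dropout2d)):
        m.train()

x = torch.randn(1, 3, 64, 64)             # the conditioning input
with torch.no_grad():
    y1 = gen(x)
    y2 = gen(x)
print(torch.allclose(y1, y2))             # False: two different "samples"
```

With dropout switched off at test time, the same input always maps to the same output, which is the deterministic regime I am describing above.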