From: Fast cosmic web simulations with generative adversarial networks
Hyperparameter | Standard GAN | Wasserstein-1 GAN | Description |
---|---|---|---|
Batch size | 16 | 16 | Number of training samples used to compute the gradient at each update |
z dimension | 200 | 100 | Dimension of the Gaussian prior distribution |
Learning rate D | \(1\cdot 10^{-5}\) | \(1\cdot 10^{-5}\) | Discriminator learning rate used by the Adam optimizer |
\(\beta_{1}\) | 0.5 | 0.5 | Exponential decay rate of the first-moment estimate in the Adam optimizer |
\(\beta_{2}\) | 0.999 | 0.999 | Exponential decay rate of the second-moment estimate in the Adam optimizer |
Learning rate G | \(1\cdot 10^{-5}\) | \(1\cdot 10^{-8}\) | Generator learning rate used by the Adam optimizer |
Gradient penalty | - | 1000 | Gradient penalty coefficient applied in the Wasserstein-1 loss |
\(a\) | 4 | 4 | Parameter in \(s(x)\) used to obtain the scaled images |
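The Adam settings above can be made concrete with a minimal, self-contained sketch of a single Adam update step using the table's values (our own illustration with hypothetical names such as `adam_step`; the paper itself would use a standard framework optimizer):

```python
def adam_step(param, grad, m, v, t, lr, beta1=0.5, beta2=0.999, eps=1e-8):
    """One Adam update with bias-corrected moment estimates.

    beta1 and beta2 default to the values in the table above.
    """
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# Discriminator and generator share the same betas, but in the
# Wasserstein-1 column they use very different learning rates
# (1e-5 for D vs 1e-8 for G).
p_d, m_d, v_d = adam_step(0.0, grad=1.0, m=0.0, v=0.0, t=1, lr=1e-5)
p_g, m_g, v_g = adam_step(0.0, grad=1.0, m=0.0, v=0.0, t=1, lr=1e-8)
```

At the first step the bias-corrected update magnitude is approximately the learning rate itself, which makes the four-orders-of-magnitude gap between the two learning rates easy to see.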