
Table 2 Hyper-parameters used in our GAN implementations. Adam (Kingma et al. 2014) is the algorithm used for the gradient-based updates in our models

From: Fast cosmic web simulations with generative adversarial networks

| Hyperparameter | Standard GAN | Wasserstein-1 GAN | Description |
|---|---|---|---|
| Batch size | 16 | 16 | Number of training samples used to compute the gradient at each update |
| z dimension | 200 | 100 | Dimension of the Gaussian prior distribution |
| Learning rate D | \(1\times10^{-5}\) | \(1\times10^{-5}\) | Discriminator learning rate used by the Adam optimizer |
| \(\beta_{1}\) | 0.5 | 0.5 | Exponential decay rate for the first-moment estimates in the Adam optimizer |
| \(\beta_{2}\) | 0.999 | 0.999 | Exponential decay rate for the second-moment estimates in the Adam optimizer |
| Learning rate G | \(1\times10^{-5}\) | \(1\times10^{-8}\) | Generator learning rate used by the Adam optimizer |
| Gradient penalty | – | 1000 | Gradient penalty coefficient applied for Wasserstein-1 |
| a | 4 | 4 | Parameter in s(x) used to obtain the scaled images |
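For concreteness, the sketch below shows one way to wire these settings into a Wasserstein-1 (WGAN-GP-style) training update. It is a minimal illustration under stated assumptions, not the authors' implementation: PyTorch is assumed, the generator and discriminator networks are toy placeholders rather than the paper's architectures, and only the Wasserstein-1 column's values are used.

```python
# Hypothetical sketch (not the paper's code): Adam optimizers configured with
# the Table 2 hyper-parameters, plus a Wasserstein-1 gradient penalty term.
import torch
import torch.nn as nn

BATCH_SIZE = 16       # training samples per gradient update
Z_DIM = 100           # dimension of the Gaussian prior (Wasserstein-1 column)
LR_D = 1e-5           # discriminator learning rate
LR_G = 1e-8           # generator learning rate (Wasserstein-1 column)
BETAS = (0.5, 0.999)  # Adam exponential decay rates (beta_1, beta_2)
GP_LAMBDA = 1000.0    # gradient penalty coefficient (Wasserstein-1 only)

# Toy placeholder networks; the paper uses convolutional architectures.
G = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(), nn.Linear(256, 64 * 64))
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, 1))

opt_D = torch.optim.Adam(D.parameters(), lr=LR_D, betas=BETAS)
opt_G = torch.optim.Adam(G.parameters(), lr=LR_G, betas=BETAS)

def gradient_penalty(D, real, fake):
    """Penalize the gradient norm of D deviating from 1 at random
    interpolates between real and generated samples."""
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return GP_LAMBDA * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# One discriminator update on toy data drawn from the Gaussian prior.
real = torch.randn(BATCH_SIZE, 64 * 64)
fake = G(torch.randn(BATCH_SIZE, Z_DIM)).detach()
loss_D = D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)
opt_D.zero_grad()
loss_D.backward()
opt_D.step()
```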