Table 2 Discriminator network architecture: layer types, activations, output shapes (channels × height × width), and number of trainable parameters for each layer. All convolutional layers have stride = 2; the LeakyReLU leakiness is 0.2.

From: CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks

 

Layer        Activ.    Output shape      Params.
Input map    –         1 × 256 × 256     –
Conv 5 × 5   LReLU     64 × 128 × 128    1664
Conv 5 × 5   –         128 × 64 × 64     205K
BatchNorm    LReLU     128 × 64 × 64     256
Conv 5 × 5   –         256 × 32 × 32     819K
BatchNorm    LReLU     256 × 32 × 32     512
Conv 5 × 5   –         512 × 16 × 16     3.3M
BatchNorm    LReLU     512 × 16 × 16     1024
Linear       Sigmoid   1                 131K

Total trainable parameters: 4.4M
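
For concreteness, the layer stack in Table 2 can be written out as a short module. The sketch below is a minimal illustration, assuming a PyTorch implementation with padding = 2 so that each stride-2 5 × 5 convolution halves the spatial resolution; the framework, the padding choice, and the class name `Discriminator` are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch of the Table 2 discriminator: four stride-2 5x5 convolutions
    with LeakyReLU(0.2) and BatchNorm, followed by a sigmoid linear head."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=2, padding=2),    # 64 x 128 x 128
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2),  # 128 x 64 x 64
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, kernel_size=5, stride=2, padding=2), # 256 x 32 x 32
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, kernel_size=5, stride=2, padding=2), # 512 x 16 x 16
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 16 * 16, 1),  # ~131K parameters
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    d = Discriminator()
    n_params = sum(p.numel() for p in d.parameters() if p.requires_grad)
    print(n_params)                          # 4,436,225, i.e. the ~4.4M total in the table
    print(d(torch.randn(8, 1, 256, 256)).shape)  # torch.Size([8, 1])
```

Counting the module's trainable parameters per layer reproduces the figures in the table (1664, 205K, 256, 819K, 512, 3.3M, 1024, 131K) and the 4.4M total.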