Abstract: Generative adversarial networks (GANs) have been applied successfully in medical image analysis, including data augmentation and image-to-image translation. Limited by memory, most current GAN models, especially 3D GANs, are trained on low-resolution medical images. In this work, we propose a novel end-to-end GAN architecture that can be trained on 3D high-resolution images. The key idea is to introduce a subsample layer that reduces the dimensionality of the generator's feature maps during training. At test time, the subsample layer can be removed to directly generate a full image volume at 256^3. An encoder is incorporated into the model to learn feature representations of images for disease-severity prediction. Experiments on 3D thorax CT and brain MRI demonstrate that our approach generates images of better quality than baseline models.
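The abstract does not specify how the subsample layer is realized; as a toy illustration only, the sketch below assumes strided spatial subsampling of a 3D feature map, applied during training and removed (replaced by the identity) at test time. All names, shapes, and the stride value are hypothetical, and the small feature-map size stands in for the much larger volumes used in practice.

```python
import numpy as np

def subsample(feat, stride=2):
    # Hypothetical subsample layer: keep every `stride`-th voxel along the
    # three spatial axes, shrinking the feature map (and its memory cost)
    # during training.
    return feat[..., ::stride, ::stride, ::stride]

# Hypothetical generator feature map: (channels, D, H, W); a toy size
# stands in for the full 256^3 volume described in the abstract.
feat = np.random.rand(4, 16, 16, 16).astype(np.float32)

# Training: downstream layers see the reduced feature map.
train_feat = subsample(feat)   # shape (4, 8, 8, 8)

# Test time: the subsample layer is removed, so the full-resolution
# feature map passes through unchanged.
test_feat = feat               # shape (4, 16, 16, 16)
```

Because subsequent convolutional layers are resolution-agnostic, the same trained weights can process the full-size feature maps once the subsample layer is dropped, which is what allows direct full-volume generation at test time.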