Learning Hierarchical Features with Joint Latent Space Energy-Based Prior

Jiali Cui¹, Ying Nian Wu², Tian Han¹

¹ Stevens Institute of Technology (SIT)
² University of California, Los Angeles (UCLA)

Abstract

This paper studies the fundamental problem of learning hierarchical representations with multi-layer generator models. Multi-layer generator models, which consist of multiple layers of latent variables organized in a top-down architecture, tend to learn multiple levels of data abstraction. However, such multi-layer latent variables are typically parameterized as Gaussians, which can be too uninformative to capture complex abstractions and has led to limited success in hierarchical representation learning. In this paper, we propose to learn a joint latent space energy-based model (EBM) prior for effective hierarchical representation learning. To ensure efficient inference, we develop a variational joint training scheme in which an inference model is seamlessly integrated and jointly trained with the generator model. Our experiments demonstrate that the proposed joint EBM prior is effective and expressive in capturing hierarchical representations and modeling complex data distributions.
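
To make the setup concrete, the sketch below is a minimal, hypothetical PyTorch rendering (not the authors' released implementation) of the three ingredients the abstract names: a top-down multi-layer generator, a joint EBM prior that tilts a standard Gaussian base density over the concatenated latent layers, p_alpha(z) ∝ exp(f_alpha(z)) p_0(z), and short-run Langevin sampling to obtain prior-side samples for the maximum-likelihood gradient. The number of latent layers, all dimensions and widths, and the sampler settings are illustrative assumptions; in the full method, posterior samples would come from the jointly trained variational inference model rather than the stand-in used here.

    import torch
    import torch.nn as nn

    class JointEBMPrior(nn.Module):
        """Energy over the concatenation of all latent layers, tilting a
        standard Gaussian base: p_alpha(z) propto exp(f_alpha(z)) p_0(z)."""
        def __init__(self, z_dims=(16, 8, 4), hidden=128):
            super().__init__()
            self.f = nn.Sequential(
                nn.Linear(sum(z_dims), hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, 1),
            )

        def energy(self, zs):
            z = torch.cat(zs, dim=-1)            # joint latent: [B, sum(d_i)]
            tilt = self.f(z).squeeze(-1)         # learned correction f_alpha(z)
            base = 0.5 * (z ** 2).sum(dim=-1)    # -log p_0(z) up to a constant
            return base - tilt                   # low energy <=> high prior density

    class TopDownGenerator(nn.Module):
        """Decodes latents top-down; the top layer carries the most abstract factors."""
        def __init__(self, z_dims=(16, 8, 4), x_dim=784, hidden=256):
            super().__init__()
            d1, d2, d3 = z_dims                  # bottom -> top
            self.top = nn.Sequential(nn.Linear(d3, hidden), nn.SiLU())
            self.mid = nn.Sequential(nn.Linear(hidden + d2, hidden), nn.SiLU())
            self.out = nn.Linear(hidden + d1, x_dim)

        def forward(self, zs):
            z1, z2, z3 = zs
            h = self.top(z3)                     # start from the most abstract layer
            h = self.mid(torch.cat([h, z2], dim=-1))
            return self.out(torch.cat([h, z1], dim=-1))

    def langevin(prior, zs, steps=60, step=0.1):
        """Short-run Langevin dynamics for approximate samples from p_alpha."""
        zs = [z.detach().clone().requires_grad_(True) for z in zs]
        for _ in range(steps):
            grads = torch.autograd.grad(prior.energy(zs).sum(), zs)
            with torch.no_grad():
                for z, g in zip(zs, grads):
                    z.add_(-0.5 * step ** 2 * g + step * torch.randn_like(z))
        return [z.detach() for z in zs]

    def prior_loss(prior, zs_posterior):
        """ML surrogate for the prior parameters: contrast posterior samples
        (from the jointly trained inference model) with current prior samples."""
        zs_prior = langevin(prior, [torch.randn_like(z) for z in zs_posterior])
        return prior.energy(zs_posterior).mean() - prior.energy(zs_prior).mean()

    if __name__ == "__main__":
        z_dims = (16, 8, 4)
        prior, gen = JointEBMPrior(z_dims), TopDownGenerator(z_dims)
        zs = langevin(prior, [torch.randn(32, d) for d in z_dims])
        x = gen(zs)                              # synthesized batch: [32, 784]
        # Stand-in for inference-model posterior samples, for illustration only.
        loss = prior_loss(prior, [torch.randn(32, d) for d in z_dims])
        print(x.shape, loss.item())

In the full training loop, the generator and inference model would additionally be optimized with a reconstruction/ELBO-style objective alongside this prior update; the sketch omits that loop for brevity.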


    @InProceedings{Cui_2023_ICCV,
        author    = {Cui, Jiali and Wu, Ying Nian and Han, Tian},
        title     = {Learning Hierarchical Features with Joint Latent Space Energy-Based Prior},
        booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
        month     = {October},
        year      = {2023},
        pages     = {2218-2227}
    }
    
