Energy-based models (EBMs) are a flexible class of deep generative models and are well-suited to capturing complex dependencies in multimodal data. However, learning a multimodal EBM by maximum likelihood estimation (MLE) requires Markov chain Monte Carlo (MCMC) sampling in the joint data space, where noise-initialized Langevin dynamics often mixes poorly and fails to discover coherent inter-modal relationships. Multimodal VAEs have made progress in capturing such inter-modal dependencies by introducing a shared latent generator and a joint inference model. However, both are parameterized as unimodal Gaussian (or Laplace) distributions, which severely limits their ability to approximate the complex structure induced by multimodal data.
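For context on the sampling difficulty described above, here is a minimal NumPy sketch of noise-initialized Langevin dynamics, the update x ← x − (s/2)∇E(x) + √s·ε. The quadratic toy energy and all function names are illustrative assumptions, not the paper's models:

```python
import numpy as np

def langevin_refine(x0, grad_energy, n_steps=200, step_size=0.05, seed=0):
    """Short-run Langevin dynamics: x <- x - (s/2) * grad E(x) + sqrt(s) * noise."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - 0.5 * step_size * grad_energy(x) + np.sqrt(step_size) * noise
    return x

# Toy quadratic energy E(x) = ||x - mu||^2 / 2 (illustrative only);
# its gradient pulls samples toward mu.
mu = np.array([2.0, -1.0])
grad_E = lambda x: x - mu

# Noise-initialized chains, as in the poorly-mixing baseline discussed above.
x_init = np.random.default_rng(1).standard_normal((256, 2))
x_out = langevin_refine(x_init, grad_E)
print(x_out.mean(axis=0))  # chain means drift toward mu
```

On this convex toy energy the chain mixes easily; the abstract's point is that on the highly multimodal joint energy landscapes of real multimodal data, the same noise-initialized procedure stalls in poor modes.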
In this work, we study the joint learning problem of the multimodal EBM, shared latent generator, and joint inference model. We present a framework that effectively interweaves their MLE updates with corresponding MCMC refinements in both the data space and the latent space. The generator learns to produce coherent multimodal samples that serve as strong initial states for EBM sampling, while the inference model provides informative latent initializations for generator posterior sampling. Together, these complementary models enable effective EBM sampling and learning, yielding realistic and coherent multimodal outputs. Extensive experiments demonstrate superior synthesis quality and coherence over various baselines, with ablation studies validating the effectiveness and scalability of the proposed framework.
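The interleaved sampling scheme above can be sketched end to end with toy stand-ins. The linear "generator", Gaussian "inference" draw, and quadratic energy below are all hypothetical placeholders assumed for illustration, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a linear "generator" g(z) = z W mapping a
# 2-d latent to a 4-d data space, and a quadratic toy energy E(x) = ||x||^2/2.
W = rng.standard_normal((2, 4))
generator = lambda z: z @ W
energy_grad = lambda x: x

def langevin(x, grad, n_steps=30, step_size=0.02):
    # Short-run MCMC refinement, used in both the latent and the data space.
    for _ in range(n_steps):
        x = (x - 0.5 * step_size * grad(x)
               + np.sqrt(step_size) * rng.standard_normal(x.shape))
    return x

# 1. The inference model proposes latent initializations (here: a Gaussian draw).
z0 = rng.standard_normal((128, 2))
# 2. Latent-space Langevin refines z (here against a standard-normal prior
#    gradient; in the paper, toward the generator posterior).
z1 = langevin(z0, lambda z: z)
# 3. The generator maps refined latents to data-space initial states, replacing
#    the noise initialization of plain EBM sampling.
x0 = generator(z1)
# 4. Data-space Langevin under the EBM refines these strong initial states.
x1 = langevin(x0, energy_grad)
```

The key design choice mirrored here is that neither chain starts from noise: the inference model warm-starts latent sampling, and the generator warm-starts data-space sampling.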
Coming soon.