Assumptions on the distribution of the latents (with auxiliary data)
- Variational Autoencoders and Nonlinear ICA: A Unifying Framework
- Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA
Weak supervision with temporal samples
- Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning
- Disentanglement via Mechanism Sparsity Regularization: A New Principle for Nonlinear ICA
Weak supervision with paired samples (i.e., counterfactuals) arising from perturbations
- Weakly-Supervised Disentanglement Without Compromises
- Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style
- Weakly Supervised Representation Learning with Sparse Perturbations
- Weakly supervised causal representation learning
- The Incomplete Rosetta Stone Problem: Identifiability Results for Multi-View Nonlinear ICA
Distributional assumptions arising from interventions
- BISCUIT: Causal Representation Learning from Binary Interactions
- CITRIS: Causal Identifiability from Temporal Intervened Sequences
- Interventional Causal Representation Learning
- Nonparametric Identifiability of Causal Representations from Unknown Interventions
- Linear Causal Disentanglement via Interventions
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing
Constraints on the decoding function
- On the Identifiability of Nonlinear ICA: Sparsity and Beyond
- Identifiable Deep Generative Models via Sparse Decoding