MEng Thesis: Fine-tuning Generative Models
Deep generative models have emerged as a powerful paradigm for making sense of large amounts of unlabeled real-world data. In particular, the representations these models produce have proven useful both for improving human understanding of the factors of variation in the original dataset and for downstream tasks such as classification. Most current algorithms, however, require training a bespoke model from scratch, which is both expensive and time-consuming. We instead propose several methods for fine-tuning pre-trained generative models to obtain such representations, and evaluate these methods quantitatively on few-shot classification and interpretability tasks.