Taming Normalizing Flows

Shimon Malnick¹, Shai Avidan¹, Ohad Fried²
¹Tel Aviv University, ²Reichman University

Toy Example

Abstract

We propose an algorithm for taming Normalizing Flow models — changing the probability that the model will produce a specific image or image category.

We focus on Normalizing Flows because they can compute the exact likelihood of generating a given image. We demonstrate taming using models that generate human faces, a subdomain with many interesting privacy and bias considerations.

Our method can be used in the context of privacy, e.g., removing a specific person from the output of a model, and also in the context of debiasing by forcing a model to output specific image categories according to a given target distribution.

Taming is achieved with a fast fine-tuning process that does not require retraining the model from scratch, and completes in a matter of minutes. We evaluate our method qualitatively and quantitatively, showing that the generation quality remains intact while the desired changes are applied.
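As a rough illustration of the exact-likelihood property mentioned above, the toy sketch below (not the paper's code) evaluates log p(x) with the change-of-variables formula; the element-wise affine map and its parameters s and b are illustrative stand-ins for a real flow such as Glow.

# Toy illustration (not the paper's implementation) of why normalizing flows
# give exact likelihoods: log p_x(x) = log p_z(f(x)) + log |det df/dx|.
import torch

torch.manual_seed(0)
dim = 4

# A toy invertible map z = (x - b) * exp(-s); real models stack many such
# invertible layers, but the likelihood bookkeeping is the same.
s = torch.randn(dim) * 0.1   # log-scale parameters
b = torch.randn(dim)         # shift parameters

def log_likelihood(x):
    z = (x - b) * torch.exp(-s)                  # forward pass x -> z
    base = torch.distributions.Normal(0.0, 1.0)  # standard Gaussian base density
    log_pz = base.log_prob(z).sum(dim=-1)        # log p_z(f(x))
    log_det = (-s).sum()                         # log |det df/dx| of the affine map
    return log_pz + log_det

x = torch.randn(8, dim)
print(log_likelihood(x))  # exact log-likelihood of each sample under the flow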

Forget an identity

In our work, we show that we can reduce the generation probability of a specific identity.
The table below shows that we can push the generation probability of the target images below a chosen threshold, while also reducing the likelihood on unseen images of that identity and retaining the probability on the rest of the space.
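Conceptually, the fine-tuning trades off two terms: pushing the log-likelihood of the identity's images below the threshold while keeping the likelihood of a reference set close to that of the original model. The sketch below is only a loose illustration of this idea, not the exact objective from the paper; flow (with a log_prob method), forget_images, reference_images, ref_logp_old, and threshold are hypothetical placeholders.

# Loose sketch of the forgetting idea (hypothetical API, not the paper's code):
# lower the likelihood of a target identity below a threshold while a
# reference batch keeps its original likelihood.
import torch

def taming_step(flow, optimizer, forget_images, reference_images,
                ref_logp_old, threshold, lam=1.0):
    optimizer.zero_grad()

    # Push the likelihood of the "forget" images below the threshold.
    logp_forget = flow.log_prob(forget_images)
    forget_loss = torch.relu(logp_forget - threshold).mean()

    # Keep the likelihood of everything else close to the original model.
    logp_ref = flow.log_prob(reference_images)
    remember_loss = (logp_ref - ref_logp_old).pow(2).mean()

    loss = forget_loss + lam * remember_loss
    loss.backward()
    optimizer.step()
    return loss.item()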

Increase the attributes "Blond Hair" and "Smiling"

The video below shows the process of emphasizing the attributes "Blond Hair" and "Smiling".
Because our method depends on the data it is given, and the data contains relatively few blond males, the results are less pronounced on male identities in this scenario (see the identity in the 2nd row, 1st column).
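The attribute scenario uses the same kind of regularized fine-tuning, with the data term flipped to raise the likelihood of images labeled with the target attributes. As before, this is only an illustrative sketch with hypothetical placeholders (flow, attribute_images, reference_images, ref_logp_old), not the exact objective from the paper.

# Illustrative variant for attribute emphasis (hypothetical API, see above):
# raise the likelihood of images labeled "Blond Hair" / "Smiling" while a
# reference batch is kept close to its original likelihood.
import torch

def emphasize_step(flow, optimizer, attribute_images, reference_images,
                   ref_logp_old, lam=1.0):
    optimizer.zero_grad()
    boost_loss = -flow.log_prob(attribute_images).mean()      # raise log p
    logp_ref = flow.log_prob(reference_images)
    remember_loss = (logp_ref - ref_logp_old).pow(2).mean()   # stay close
    loss = boost_loss + lam * remember_loss
    loss.backward()
    optimizer.step()
    return loss.item()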

BibTeX


@article{malnick2023taming,
  title={Taming Normalizing Flows},
  author={Malnick, Shimon and Avidan, Shai and Fried, Ohad},
  journal={arXiv preprint arXiv:2211.16488},
  year={2023}
}