MAE-CT (Contrastive Tuning)

A Little Help to Make Masked Autoencoders Forget

[Code] [arXiv] [Models] [BibTeX] [Follow-up work (MIM-Refiner)]

Masked AutoEncoder Contrastive Tuning (MAE-CT) tunes the representation of a pre-trained Masked Autoencoder (MAE) to form semantic clusters via an NNCLR training stage.
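The NNCLR-style objective used in the tuning stage can be sketched as follows. This is a minimal, illustrative NumPy version, not the official MAE-CT implementation: each embedding of the first augmented view is swapped for its nearest neighbor in a support queue, and that neighbor is contrasted against the second view with an InfoNCE loss. All names, shapes, and the temperature value are assumptions for illustration.

```python
import numpy as np

def nnclr_loss(z1, z2, queue, temperature=0.1):
    """Illustrative NNCLR-style loss (not the official MAE-CT code).

    z1, z2 : (B, D) embeddings of two augmented views of the same batch.
    queue  : (Q, D) support set of past embeddings.
    """
    # L2-normalize embeddings and queue entries so dot products are cosines
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    queue = queue / np.linalg.norm(queue, axis=1, keepdims=True)

    # Nearest-neighbor swap: replace each z1 with its closest queue entry
    nn_idx = np.argmax(z1 @ queue.T, axis=1)
    nn = queue[nn_idx]

    # InfoNCE: the neighbor of sample i's view 1 should match sample i's view 2
    logits = (nn @ z2.T) / temperature           # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # cross-entropy on the diagonal
```

In the actual training stage the queue is updated with fresh embeddings each step; the nearest-neighbor lookup is what encourages the tuned MAE features to cluster semantically rather than match exact augmentations.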

[Figure: MAE-CT schematic] [Figure: low-shot results with ViT-L]