Mamba®: Vision Mamba ALSO Needs Registers

Feng Wang1, Jiahao Wang1, Sucheng Ren1, Guoyizhe Wei1, Jieru Mei1, Wei Shao2, Yuyin Zhou3, Alan Yuille1, Cihang Xie3
1Johns Hopkins University, 2University of Florida, 3UC Santa Cruz

Framework of Mamba®


We address Vision Mamba's artifact issue by evenly inserting input-independent register tokens into the input sequence. In the final layer, we concatenate the outputs of the register tokens to form a global representation for the final prediction.
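For concreteness, here is a minimal PyTorch sketch of these two modifications: evenly interleaving learnable register tokens with the patch tokens, then recycling the register outputs at the classification head. The class name, identity-block placeholders, and hyperparameters below are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn


class VisionMambaR(nn.Module):
    """Sketch: evenly interleave learnable register tokens with patch tokens,
    run placeholder blocks, then recycle the register outputs as the global
    representation for classification."""

    def __init__(self, embed_dim=192, depth=24, num_registers=12, num_classes=1000):
        super().__init__()
        self.num_registers = num_registers
        # Input-independent (learnable) register tokens.
        self.registers = nn.Parameter(torch.zeros(num_registers, embed_dim))
        # Placeholders standing in for the actual Mamba blocks.
        self.blocks = nn.ModuleList([nn.Identity() for _ in range(depth)])
        self.head = nn.Linear(num_registers * embed_dim, num_classes)

    def insert_registers(self, patch_tokens):
        """Evenly insert one register before each chunk of patch tokens."""
        B, N, D = patch_tokens.shape
        chunks = patch_tokens.chunk(self.num_registers, dim=1)
        regs = self.registers.unsqueeze(0).expand(B, -1, -1)  # (B, R, D)
        pieces, reg_positions = [], []
        for i, chunk in enumerate(chunks):
            pieces.append(regs[:, i : i + 1])  # one register token
            reg_positions.append(sum(p.shape[1] for p in pieces) - 1)
            pieces.append(chunk)               # followed by a chunk of patches
        return torch.cat(pieces, dim=1), reg_positions

    def forward(self, patch_tokens):
        x, reg_pos = self.insert_registers(patch_tokens)
        for blk in self.blocks:
            x = blk(x)
        # Recycle register outputs: concatenate them into one global feature.
        global_feat = x[:, reg_pos].flatten(1)  # (B, R * D)
        return self.head(global_feat)


# Usage: 196 patch tokens of dimension 192 (e.g., a 14x14 patch grid).
tokens = torch.randn(2, 196, 192)
logits = VisionMambaR()(tokens)
print(logits.shape)  # torch.Size([2, 1000])
```

In practice the interleaved sequence would pass through real Mamba blocks rather than identity placeholders; the sketch only shows where the registers are inserted and how their outputs are recycled.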


Abstract

Similar to what has been observed in Vision Transformers, this paper identifies artifacts within the feature maps of Vision Mamba. These artifacts, corresponding to high-norm tokens emerging in low-information background areas of images, appear much more severe in Vision Mamba---they exist prevalently even with the tiny-sized model and activate extensively across background regions. To mitigate this issue, we follow the prior solution of introducing register tokens into Vision Mamba. To better cope with Mamba blocks' uni-directional inference paradigm, two key modifications are introduced: 1) evenly inserting registers throughout the input token sequence, and 2) recycling registers for final decision predictions. We term this new architecture Mamba®. Qualitative observations suggest that, compared to vanilla Vision Mamba, Mamba®'s feature maps appear cleaner and more focused on semantically meaningful regions. Quantitatively, Mamba® attains stronger performance and scales better. For example, on the ImageNet benchmark, our Mamba®-B attains 82.9% accuracy, significantly outperforming Vim-B's 81.8%; furthermore, we provide the first successful scaling to the large model size (i.e., 341M parameters), attaining a competitive accuracy of 83.2% (84.5% if finetuned with 384×384 inputs). Additional validation on the downstream semantic segmentation task also supports Mamba®'s efficacy.


Massive artifacts in Vision Mamba


Feature maps of vanilla Vision Mamba (Vim) exhibit massive artifacts, making it difficult for the model to attend to visually meaningful content within the image. In contrast, our model exhibits much cleaner feature activations, showcasing the significant efficacy of our enhanced architectural design.
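As a rough illustration of how such feature maps can be inspected, the sketch below renders per-patch L2 norms as a heatmap; the random feature tensor, patch-grid size, and colormap are assumptions for demonstration, not the paper's exact visualization protocol.

```python
import torch
import matplotlib.pyplot as plt


def norm_heatmap(patch_features, grid_size=(14, 14)):
    """Visualize per-patch feature norms; high-norm background patches
    show up as bright artifact spots in vanilla Vim-style models."""
    # patch_features: (num_patches, dim) local outputs from one layer.
    norms = patch_features.norm(dim=-1)               # (num_patches,)
    heatmap = norms.reshape(*grid_size).cpu().numpy()
    plt.imshow(heatmap, cmap="viridis")
    plt.colorbar(label="L2 norm")
    plt.title("Per-patch feature norms")
    plt.show()


# Example with random features standing in for one layer's local outputs.
norm_heatmap(torch.randn(196, 192))
```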


Feature maps for different registers


The registers can sometimes attend to different parts or semantics within an image. Similar to the multi-head self-attention mechanism, this property is not explicitly enforced but emerges naturally from training.


Artifacts correspond to high norm values


Distributions of the norm values of local outputs across different layers. The results quantitatively show that our Mamba® effectively reduces the number of high-norm outliers.
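A hedged sketch of how such layer-wise norm statistics could be collected: gather each layer's local outputs, compute per-token norms, and count tokens above a chosen outlier threshold. The threshold value and the dummy inputs are assumptions, not the paper's exact measurement protocol.

```python
import torch


def high_norm_outlier_stats(per_layer_outputs, threshold=50.0):
    """per_layer_outputs: list of (B, N, D) local outputs, one per layer.
    Returns, for each layer, the mean token norm and the fraction of tokens
    whose norm exceeds `threshold` (treated here as 'high-norm outliers')."""
    stats = []
    for layer_idx, feats in enumerate(per_layer_outputs):
        norms = feats.norm(dim=-1)                        # (B, N)
        outlier_frac = (norms > threshold).float().mean().item()
        stats.append((layer_idx, norms.mean().item(), outlier_frac))
    return stats


# Example with random features standing in for 24 layers of local outputs.
dummy = [torch.randn(2, 208, 192) * (1 + i) for i in range(24)]
for layer, mean_norm, frac in high_norm_outlier_stats(dummy):
    print(f"layer {layer:2d}: mean norm {mean_norm:6.2f}, outlier fraction {frac:.3f}")
```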