SuperCap: Multi-resolution Superpixel-based Image Captioning
Published in arXiv, 2025
Abstract
It has been a longstanding goal within image captioning to move beyond a dependence on object detection. We investigate using superpixels coupled with Vision Language Models (VLMs) to bridge the gap between detector-based captioning architectures and those that solely pretrain on large datasets. Our novel superpixel approach ensures that the model receives object-like features, whilst the use of VLMs provides our model with open-set object understanding. Furthermore, we extend our architecture to make use of multi-resolution inputs, allowing our model to view images at different levels of detail, and use an attention mechanism to determine which parts are most relevant to the caption. We demonstrate our model’s performance with multiple VLMs and through a range of ablations detailing the impact of different architectural choices. Our full model achieves a competitive CIDEr score of 136.9 on the COCO Karpathy split.
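The sketch below illustrates the core idea described above, not the authors' implementation: segment an image into superpixels at more than one resolution and mean-pool VLM patch features over each region to obtain object-like region features. The choice of SLIC, the CLIP ViT-B/32 checkpoint, the resolution settings, and the nearest-neighbour alignment between superpixels and the patch grid are all illustrative assumptions.

```python
# Hedged sketch: superpixel regions + pooled VLM features (assumptions noted above).
import numpy as np
import torch
from PIL import Image
from skimage.segmentation import slic
from transformers import CLIPImageProcessor, CLIPVisionModel

processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
vision = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32").eval()

def superpixel_features(image: Image.Image, n_segments: int) -> torch.Tensor:
    """Return one mean-pooled VLM feature vector per superpixel region."""
    inputs = processor(images=image, return_tensors="pt")            # resized/cropped to 224x224
    with torch.no_grad():
        patch_tokens = vision(**inputs).last_hidden_state[0, 1:]     # (49, 768) patch tokens, CLS dropped
    grid = int(patch_tokens.shape[0] ** 0.5)                         # 7x7 patch grid for ViT-B/32
    # Superpixels are computed on a 224x224 copy so the label map roughly aligns with the crop.
    resized = np.array(image.resize((224, 224)))
    labels = slic(resized, n_segments=n_segments, start_label=0)
    # Assign each ViT patch the superpixel label at its (approximate) centre.
    ys = np.linspace(0, 223, grid).astype(int)
    xs = np.linspace(0, 223, grid).astype(int)
    patch_labels = labels[np.ix_(ys, xs)].reshape(-1)
    feats = []
    for region in np.unique(patch_labels):
        mask = torch.from_numpy(patch_labels == region)
        feats.append(patch_tokens[mask].mean(dim=0))                 # mean-pool features inside the region
    return torch.stack(feats)                                        # (num_regions, hidden_dim)

# Multi-resolution input: coarse and fine superpixel maps of the same image.
image = Image.open("example.jpg").convert("RGB")
multi_res = [superpixel_features(image, n) for n in (25, 100)]
```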
Citation
@article{senior2025supercap,
title={SuperCap: Multi-resolution Superpixel-based Image Captioning},
author={Senior, Henry and Rossi, Luca and Slabaugh, Gregory and Yuan, Shanxin},
journal={arXiv preprint arXiv:2503.08496},
year={2025}
}
Key Takeaway
We demonstrate that by using VLM features of superpixel regions, we can outperform models that use either BUTD object detection data or patch features generated with the same VLM. SuperCap builds on this by adding multi-resolution support to the captioning model's encoder, enabling it to generate a stronger representation of the image (a sketch of this fusion follows below).
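As a rough illustration of the multi-resolution encoder idea, region features from each resolution can be concatenated into one sequence and passed through an attention layer so the caption decoder can weight the most relevant regions. The layer sizes, the residual/LayerNorm arrangement, and the use of `nn.MultiheadAttention` here are assumptions for the sketch, not the paper's exact architecture.

```python
# Hedged sketch: fusing multi-resolution superpixel features with self-attention.
import torch
import torch.nn as nn

class MultiResEncoder(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, region_feats: list[torch.Tensor]) -> torch.Tensor:
        # region_feats: one (num_regions_i, dim) tensor per resolution.
        tokens = torch.cat(region_feats, dim=0).unsqueeze(0)    # (1, total_regions, dim)
        attended, _ = self.attn(tokens, tokens, tokens)          # attend across all resolutions
        return self.norm(tokens + attended)                      # fused image representation

encoder = MultiResEncoder()
coarse = torch.randn(25, 768)     # e.g. 25 coarse superpixel features
fine = torch.randn(100, 768)      # e.g. 100 fine superpixel features
fused = encoder([coarse, fine])   # (1, 125, 768) sequence for the caption decoder
```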
Citation: Senior, H., Rossi, L., Slabaugh, G., & Yuan, S. (2025). SuperCap: Multi-resolution Superpixel-based Image Captioning. arXiv preprint arXiv:2503.08496. https://arxiv.org/abs/2503.08496v1