Sparse Structure Exploration and Re-optimization for Vision Transformer
-
Year
2025
-
Venue
41st Conference on Uncertainty in Artificial Intelligence (UAI)
-
Authors
Sangho An, Jinwoo Kim, Keonho Lee, Jingang Huh, Chanwoong Kwak, Yujin Lee, Moonsub Jin, Jangho Kim
Abstract
Vision Transformers (ViTs) achieve outstanding performance by effectively capturing long-range dependencies between image patches (tokens). However, the high computational cost and memory requirements of ViTs present challenges for model compression and deployment on edge devices. In this study, we introduce a new framework, Sparse Structure Exploration and Re-optimization (SERo), specifically designed to maximize pruning efficiency in ViTs. Our approach focuses on (1) hardware-friendly pruning that fully compresses pruned parameters instead of zeroing them out, (2) separating the exploration and re-optimization phases in order to find the optimal structure among various possible sparse structures, and (3) using a simple gradient magnitude-based criterion for pruning a pre-trained model. SERo iteratively refines pruning masks to identify optimal sparse structures and then re-optimizes the pruned structure, reducing computational costs while maintaining model performance. Experimental results indicate that SERo surpasses existing pruning methods across various ViT models in both performance and computational efficiency. For example, SERo achieves a 69% reduction in computational cost and a 2.4x increase in processing speed for the DeiT-Base model, with only a 1.55% drop in accuracy.
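The two key ideas in the abstract, a gradient magnitude-based pruning criterion and hardware-friendly compression that physically removes pruned parameters rather than zeroing them, can be illustrated with a minimal sketch. This is not the paper's implementation: the structure granularity (rows of a weight matrix), the mean-|gradient| score, and the `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def gradient_magnitude_mask(grads: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the structures (here: rows) with the largest mean |gradient|.
    Illustrative stand-in for a gradient magnitude-based criterion."""
    scores = np.abs(grads).mean(axis=1)           # one importance score per row
    k = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[-k:]                # indices of the top-k rows
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask

def compress(weights: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Hardware-friendly pruning: drop pruned rows outright instead of
    zeroing them, so the compressed tensor is genuinely smaller."""
    return weights[mask]

# Toy example: 4 rows, keep the half with the largest gradient magnitude.
grads = np.array([[0.1, 0.2], [1.0, 0.9], [0.05, 0.02], [0.6, 0.4]])
weights = np.arange(8.0).reshape(4, 2)
mask = gradient_magnitude_mask(grads, keep_ratio=0.5)
pruned = compress(weights, mask)
print(mask.tolist())   # [False, True, False, True]
print(pruned.shape)    # (2, 2): pruned rows are physically removed
```

In the paper's framing, exploration would iteratively refine such masks before the surviving structure is compressed and re-optimized (fine-tuned); this sketch shows only a single masking-and-compression step.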