ASAP: Attention-Shift-Aware Pruning for Efficient LVLM Inference | ScienceToStartup