@article{liu2026continualvla,
  title={Pretrained Vision-Language-Action Models are Surprisingly Resistant to Forgetting in Continual Learning},
  author={Liu, Huihan and Kim, Changyeon and Liu, Bo and Liu, Minghuan and Zhu, Yuke},
  journal={arXiv preprint},
  year={2026}
}