
REPAIR

Robust Editing via Progressive Adaptive Intervention and Reintegration

Authors

Yisu Wang, Ming Wang, Haoyuan Song, Wenjie Huang, Chaozheng Wang, Yi Xie, Xuming Ran

Date Published

2 October 2025

Abstract

Once trained, large language models cannot easily integrate new facts or correct mistakes without expensive retraining. Sequential edits risk instability, poor generalization, and harmful ripple effects across the model's knowledge base. We present REPAIR, a framework that combines closed-loop feedback for monitoring edits, distribution-aware optimization via intra-batch distillation, and frequent knowledge fusion with locality guardrails. Together, these components deliver precise, consistent, and scalable updates across diverse model architectures. Extensive experiments on LLaMA-3, Qwen-2.5, DeepSeek, and GPT-2-XL demonstrate 15-30% improvements in editing accuracy while reducing hallucinations and preserving unrelated knowledge. REPAIR represents a significant step toward reliable post-training model updates without catastrophic retraining costs.

Figure: REPAIR framework architecture, showing closed-loop feedback, distribution-aware optimization, and knowledge fusion.
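
To make the abstract's three ingredients concrete, the sketch below shows how one closed-loop edit step might look: an edit objective is optimized jointly with a distillation term that pins outputs on unrelated inputs to a frozen pre-edit snapshot (a locality guardrail), and the loop stops only when feedback confirms the edit holds. This is a minimal illustrative sketch, not the authors' implementation; all names, the toy model, and hyperparameters (edit_x, LOCALITY_WEIGHT, thresholds) are assumptions.

# Minimal sketch of closed-loop editing with a distillation-based
# locality guardrail. Illustrative only; not the REPAIR authors' API.
import copy
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a language-model head: features -> vocab logits.
model = torch.nn.Linear(16, 8)
reference = copy.deepcopy(model)          # frozen pre-edit snapshot
for p in reference.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

edit_x = torch.randn(1, 16)               # prompt whose answer we edit
edit_target = torch.tensor([3])           # new desired token/label
unrelated_x = torch.randn(32, 16)         # probes for unrelated knowledge

LOCALITY_WEIGHT = 1.0
MAX_STEPS = 200

for step in range(MAX_STEPS):
    optimizer.zero_grad()
    # Edit objective: make the model produce the new fact.
    edit_loss = F.cross_entropy(model(edit_x), edit_target)
    # Locality guardrail: distill the frozen reference on unrelated
    # inputs so the edit does not ripple into other knowledge.
    locality_loss = F.kl_div(
        F.log_softmax(model(unrelated_x), dim=-1),
        F.softmax(reference(unrelated_x), dim=-1),
        reduction="batchmean",
    )
    (edit_loss + LOCALITY_WEIGHT * locality_loss).backward()
    optimizer.step()

    # Closed-loop feedback: accept the edit only once it is reliable
    # and unrelated behavior stays close to the reference.
    edited_ok = model(edit_x).argmax(-1).item() == edit_target.item()
    if edited_ok and locality_loss.item() < 1e-3:
        print(f"edit converged at step {step}")
        break

In a full system, the same monitor-then-adjust loop would run per batch of edits, with periodic knowledge fusion merging accepted edits back into the base weights; the sketch compresses that to a single edit for clarity.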

Cite This Work

@article{repair2025,
  title={REPAIR: Robust Editing via Progressive Adaptive Intervention and Reintegration},
  author={Yisu Wang and Ming Wang and Haoyuan Song and Wenjie Huang and Chaozheng Wang and Yi Xie and Xuming Ran},
  archivePrefix={arXiv},
  year={2025},
}