Effective Data Pruning through Score Extrapolation
✂️ Preprint, 2025
Overview
Training on massive datasets is prohibitively expensive. Data pruning can help — but existing methods need a full training pass just to figure out which samples to remove, defeating the purpose for single training runs.
We introduce Score Extrapolation, a framework that predicts sample importance for the entire dataset after training on only a small subset. Using k-nearest neighbors or graph neural networks, we extrapolate importance scores to samples never seen during the initial training — making data pruning efficient from the very first run.
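The kNN variant can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of raw embeddings, and mean-aggregation over neighbors are all assumptions. The idea is to score only a small subset during training, then predict scores for unseen samples from their nearest scored neighbors in embedding space.

```python
import numpy as np

def extrapolate_scores_knn(subset_embeddings, subset_scores, full_embeddings, k=5):
    """Predict an importance score for each unseen sample as the mean score
    of its k nearest scored neighbors in embedding space (illustrative sketch)."""
    predicted = np.empty(len(full_embeddings))
    for i, emb in enumerate(full_embeddings):
        # Euclidean distance from this sample to every scored subset sample
        dists = np.linalg.norm(subset_embeddings - emb, axis=1)
        nearest = np.argsort(dists)[:k]
        predicted[i] = subset_scores[nearest].mean()
    return predicted
```

The GNN variant replaces this local averaging with a learned aggregation over a neighborhood graph, but the interface is the same: scores on a small subset in, scores for the full dataset out.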
Key Results
✅ Validated on 2 SOTA pruning methods (Dynamic Uncertainty & TDDS)
📊 Tested across 4 datasets: CIFAR-10, CIFAR-100, Places-365, and ImageNet
🔄 Works for 3 training paradigms: supervised, unsupervised, and adversarial
Key Contributions
🧠 Score extrapolation framework — predicts sample importance without full training
🔮 Two approaches — kNN-based and GNN-based extrapolation methods
📈 Scalable — a promising direction for scaling expensive score calculations (pruning, data attribution, and beyond)
Why It Matters
Data pruning should save compute, not cost more. Score extrapolation breaks the chicken-and-egg problem — enabling efficient pruning from day one, with implications beyond pruning for any task requiring sample-level scoring.
Abstract
Training advanced machine learning models demands massive datasets, resulting in prohibitive computational costs. Data pruning techniques address this challenge by identifying and removing redundant training samples while preserving model performance. Yet existing pruning techniques predominantly require a full initial training pass to identify removable samples, negating any efficiency benefits for single training runs. To overcome this limitation, we introduce a novel importance score extrapolation framework that requires training on only a small subset of data. We present two initial approaches within this framework, based on k-nearest neighbors and graph neural networks, that accurately predict sample importance for the entire dataset using patterns learned from this minimal subset. We demonstrate the effectiveness of our approach for two state-of-the-art pruning methods (Dynamic Uncertainty and TDDS), four datasets (CIFAR-10, CIFAR-100, Places-365, and ImageNet), and three training paradigms (supervised, unsupervised, and adversarial). Our results indicate that score extrapolation is a promising direction for scaling expensive score-calculation methods, such as pruning, data attribution, and related tasks.
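Once scores have been extrapolated to the full dataset, pruning itself reduces to keeping the highest-scoring fraction of samples. A minimal sketch (the function name and `keep_fraction` parameter are illustrative, not from the paper):

```python
import numpy as np

def prune_by_score(scores, keep_fraction=0.7):
    """Return the indices of the top-scoring fraction of samples to keep,
    discarding the rest as redundant (illustrative sketch)."""
    n_keep = int(len(scores) * keep_fraction)
    # Sort descending by score and keep the first n_keep indices
    return np.argsort(scores)[::-1][:n_keep]
```

The kept indices can then be used to subsample the training set for the actual (cheap) training run.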
Citation
@article{Schmidt2025c,
  title   = {Effective Data Pruning through Score Extrapolation},
  author  = {Schmidt, Sebastian and Dhungel, Prasanga and L\"{o}ffler, Christoffer and Nieth, Bj\"{o}rn and G\"{u}nnemann, Stephan and Schwinn, Leo},
  year    = {2025},
  month   = jun,
  journal = {ArXiv},
  volume  = {2506.09010},
  url     = {http://arxiv.org/abs/2506.09010},
}