Dataset condensation always faces a constitutive trade-off: balancing performance and fidelity under extreme compression. Existing methods struggle with two bottlenecks: image-level selection methods (Coreset Selection, Dataset Quantization) suffer from inefficient condensation, while pixel-level optimization (Dataset Distillation) introduces semantic distortion due to over-parameterization. Through empirical observation, we find that a critical problem in dataset condensation is the oversight of color's dual role as an information carrier and a basic semantic representation unit. We argue that improving the colorfulness of condensed images benefits representation learning. Motivated by this, we propose DC3: a Dataset Condensation framework with Color Compensation. After a calibrated selection strategy, DC3 uses a latent diffusion model to enhance the color diversity of an image rather than creating a brand-new one. Extensive experiments demonstrate the superior performance and generalization of DC3, which outperforms SOTA methods across multiple benchmarks. To the best of our knowledge, DC3 is the first work to fine-tune pre-trained diffusion models with condensed datasets rather than only evaluating them on downstream tasks. The FID results show that training networks with our high-quality datasets is feasible without model collapse or other degradation issues. Code and generated data will be released soon.
Inspired by aesthetic adjustments in photographic post-processing, we propose Color Compensation, a mechanism that integrates hue-related instructions into latent diffusion models to improve color diversity and alleviate Color Homogenization during dataset condensation.
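The paper does not release its measurement code here, but a common proxy for the color diversity that Color Compensation targets is the Hasler–Süsstrunk colorfulness metric. The sketch below is an illustrative assumption, not DC3's implementation; it only shows how one could quantify an image's colorfulness before and after compensation.

```python
import numpy as np

def colorfulness(img: np.ndarray) -> float:
    """Hasler-Susstrunk colorfulness of an RGB image (H, W, 3) with values in [0, 255]."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = r - g                    # red-green opponent channel
    yb = 0.5 * (r + g) - b        # yellow-blue opponent channel
    std = np.hypot(rg.std(), yb.std())
    mean = np.hypot(rg.mean(), yb.mean())
    return std + 0.3 * mean

# A uniform gray image has zero colorfulness; saturated images score high.
gray = np.full((32, 32, 3), 128, dtype=np.uint8)
print(colorfulness(gray))  # 0.0
```

A rising score after editing would indicate that the diffusion-based compensation has, at minimum, broadened the opponent-color statistics of the image.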
Comparison between conventional and diffusion-based image processing. The diffusion-based approach achieves semantic-aware color reasoning, mitigating the distortions caused by mathematical transformations in traditional pipelines.
Color Compensation enhances the diversity of the images, but we need to know which images require compensation, since compensating the entire dataset is computationally expensive. Thus, we introduce submodular sampling as a feasible solution.
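A standard way to realize submodular sampling is greedy maximization of a facility-location objective, where each newly selected sample is the one that most improves coverage of the class's feature space. The sketch below is a generic illustration under that assumption; the feature matrix, budget, and similarity choice are placeholders, and DC3's calibrated selection strategy is not reproduced here.

```python
import numpy as np

def greedy_facility_location(features: np.ndarray, budget: int) -> list[int]:
    """Greedily pick `budget` rows of `features` that maximize the
    facility-location objective sum_i max_{j in S} sim(i, j)."""
    # Cosine similarity between all pairs of L2-normalized features.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    selected: list[int] = []
    covered = np.zeros(len(f))                 # current best coverage per point
    for _ in range(budget):
        # Marginal gain of adding each candidate j to the selected set.
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf              # never pick the same sample twice
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, sim[:, j])
    return selected

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))             # stand-in for per-class features
idx = greedy_facility_location(feats, budget=10)
print(idx)
```

Because the facility-location function is monotone submodular, this greedy procedure carries the classic (1 − 1/e) approximation guarantee, which is why it is a common choice for representative-subset selection.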
t-SNE visualizations of intra-class clusters on ImageNet. Submodular sampling selects representative samples to preserve maximum semantic integrity and feature diversity.
| Dataset | IPC | TESLA† | SRe²L | RDED† | Minimax | D⁴M | IGD | DC3 | Full |
|---|---|---|---|---|---|---|---|---|---|
| ImageNet-1K | 1 | 7.7±0.2 | 0.1±0.1 | 6.6±0.2 | - | - | - | 8.1±0.3 | 69.8 |
| | 10 | 17.8±1.3 | 21.3±0.6 | 42.0±0.1 | 44.3±0.5 | 27.9±0.9 | 46.2±0.6 | 50.4±0.2 | |
| | 50 | 27.9±1.2 | 46.8±0.2 | 56.5±0.1 | 58.6±0.3 | 55.2±0.4 | 60.3±0.4 | 62.3±0.7 | |

*Matching-based: TESLA, SRe²L, RDED; generation-based: Minimax, D⁴M, IGD, DC3.*
Top-1 Accuracy↑ on ImageNet-1K. The results of DC3 and other state-of-the-art methods are evaluated with ResNet-18. †: The results of TESLA are evaluated with ConvNet.
| Dataset | IPC | RDED | Minimax | D⁴M | IGD | DC3 |
|---|---|---|---|---|---|---|
| ImageWoof | 1 | 20.8±1.2 | - | - | - | 23.2±1.4 |
| | 10 | 38.5±2.1 | 37.6±0.9 | 42.9±1.1 | 47.2±1.6 | 48.7±0.1 |
| | 50 | 68.5±0.7 | 57.1±0.6 | 62.1±0.4 | 65.4±1.8 | 72.4±0.6 |
| ImageNette | 1 | 35.8±1.0 | 32.1±0.3 | 34.1±0.9 | - | 37.6±0.9 |
| | 10 | 61.4±0.4 | 62.0±0.2 | 68.4±0.8 | 66.2±1.2 | 84.8±1.1 |
| | 50 | 80.4±0.4 | 76.6±0.2 | 81.3±0.1 | 82.0±0.3 | 89.8±0.2 |
| Tiny-ImageNet | 1 | 9.7±0.4 | 13.3±0.8 | 15.1±0.2 | - | 20.0±1.2 |
| | 10 | 41.9±0.2 | 39.2±0.7 | 35.7±0.6 | - | 45.1±1.1 |
| | 50 | 58.2±0.1 | 44.8±0.2 | 46.2±1.1 | - | 59.4±0.7 |
Top-1 Accuracy↑ on ImageNet subsets (ImageWoof, ImageNette, and Tiny-ImageNet). All results are evaluated with ResNet-18. DC3 achieves the best performance across all datasets and IPC settings.
| Dataset | IPC | RDED | SRe²L | DataDAM | D⁴M | DC3 |
|---|---|---|---|---|---|---|
| CIFAR-10 | 1 | 22.9±0.4 | 16.9±0.9 | 32.0±1.2 | 17.6±1.1 | 25.6±1.2 |
| | 10 | 37.1±0.3 | 27.2±0.5 | 54.2±0.8 | 51.5±0.5 | 57.8±0.8 |
| | 50 | 62.1±0.1 | 47.5±0.6 | 67.0±0.4 | 62.3±0.2 | 80.9±0.5 |
| CIFAR-100 | 1 | 11.0±0.3 | 2.0±0.2 | 14.5±0.5 | 5.7±1.4 | 21.1±1.6 |
| | 10 | 42.6±0.2 | 23.5±0.8 | 34.8±0.5 | 44.0±0.3 | 57.4±0.2 |
| | 50 | 62.6±0.1 | 51.4±0.8 | 49.4±0.3 | 48.4±0.9 | 64.2±0.5 |
Top-1 Accuracy↑ on CIFAR-10 and CIFAR-100. The results are evaluated with ResNet-18.
@article{wu2025dataset,
title={Dataset Condensation with Color Compensation},
author={Wu, Huyu and Su, Duo and Hou, Junjie and Li, Guang},
journal={arXiv preprint arXiv:2508.01139},
year={2025}
}