You only need to configure one file to support model heterogeneity, with consistent GPU memory usage for single or multiple clients.
Code and pretrained models for the paper: Data-Free Adversarial Distillation
[IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation
Official PyTorch implementation of Data-free Knowledge Distillation for Object Detection, WACV 2021.
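Several of the repositories above target data-free knowledge distillation, where a student network learns to match a frozen teacher's outputs without access to the teacher's original training data, typically using synthetic inputs. Below is a minimal sketch of that training loop; the generator, teacher, student, and hyperparameters are illustrative placeholders, not taken from any listed repository, and the adversarial or contrastive variants named above differ mainly in how the synthetic inputs are produced.

```python
# Minimal sketch of a data-free knowledge distillation step (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())  # noise -> synthetic inputs
teacher = nn.Sequential(nn.Linear(784, 10))                # frozen; pretrained in practice
student = nn.Sequential(nn.Linear(784, 10))                # trained to mimic the teacher

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
teacher.eval()

for step in range(100):
    z = torch.randn(64, 100)          # sample noise
    x = generator(z).detach()         # synthetic inputs stand in for real data
    with torch.no_grad():
        t_logits = teacher(x)         # teacher soft targets
    s_logits = student(x)
    # KL divergence between student and teacher output distributions
    loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1),
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```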
Implementation of the paper: Flux Already Knows – Activating Subject-Driven Image Generation without Training
[ICML2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers
Adaptive Guidance for Local Training in Heterogeneous Federated Learning
The official implementation of "DataFreeShield: Defending Adversarial Attacks without Training Data", accepted at ICML 2024.
[ICCV 2023] "TRM-UAP: Enhancing the Transferability of Data-Free Universal Adversarial Perturbation via Truncated Ratio Maximization", Yiran Liu, Xin Feng, Yunlong Wang, Wu Yang, Di Ming*
In this repo you will learn about quantization: the process of reducing the precision of a model's parameters and/or activations (e.g., from 32-bit floating point to 8-bit integers) to make neural networks smaller, faster, and more energy-efficient with minimal accuracy loss.
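As a concrete illustration of that definition, here is a minimal sketch of post-training dynamic quantization using PyTorch's `torch.ao.quantization.quantize_dynamic` API; the toy model and layer sizes are placeholders chosen for this example, not taken from the repo above.

```python
# Minimal sketch: post-training dynamic quantization in PyTorch.
# Linear layer weights are stored as int8; activations are quantized on the fly.
import torch
import torch.nn as nn

# Toy float32 model (placeholder architecture for illustration).
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10]) -- same interface, smaller weights
```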