A research team from Microsoft, Zhejiang University, Johns Hopkins University, the Georgia Institute of Technology, and the University of Denver proposes Only-Train-Once (OTO), a one-shot DNN training and pruning framework that compresses a full, heavy model into a slim architecture without fine-tuning while maintaining high performance.

Here is a quick read of the work: Only Train Once: A SOTA One-Shot DNN Training and Pruning Framework.

The paper Only Train Once: A One-Shot Neural Network Training And Pruning Framework is on arXiv.
