Neural Architecture Search (NAS) has become an active and promising area of deep learning research. One approach, vanilla NAS, explores the search space and evaluates each candidate architecture by training it from scratch. This can require thousands of GPU hours, putting the computing cost out of reach for many research applications.

To substantially lower this cost, researchers often turn to another approach, one-shot NAS, which relies on a supernet. Through weight sharing, a supernet can approximate the accuracy of any architecture in the search space without training it from scratch. However, the search can be hampered by the supernet's inaccurate accuracy predictions, making it hard to identify suitable architectures. A minimal sketch of the weight-sharing idea follows below.
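As a rough illustration of how a supernet serves as a cheap accuracy proxy, here is a minimal PyTorch-style sketch. The search space, layer sizes, and candidate operations are illustrative assumptions, not details from the article:

```python
# Minimal sketch of weight sharing in one-shot NAS (illustrative, not the
# article's method). Each layer holds several candidate ops; a sampled
# subnetwork picks one op per layer and inherits the supernet's weights.
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One supernet layer: candidate ops sharing the same position."""
    def __init__(self, dim):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Linear(dim, dim),                             # candidate 0
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),   # candidate 1
            nn.Identity(),                                   # candidate 2 (skip)
        ])

    def forward(self, x, choice):
        # Only the chosen candidate runs; its weights live in the supernet.
        return self.candidates[choice](x)

class Supernet(nn.Module):
    def __init__(self, dim, depth):
        super().__init__()
        self.layers = nn.ModuleList(MixedLayer(dim) for _ in range(depth))

    def forward(self, x, arch):
        # `arch` is a list of per-layer choices defining one subnetwork.
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        return x

# Once the supernet is trained, candidate architectures are scored by
# inheriting its weights instead of being trained from scratch.
supernet = Supernet(dim=16, depth=4)
x = torch.randn(8, 16)
arch = [random.randrange(3) for _ in range(4)]  # sample a random subnetwork
with torch.no_grad():
    out = supernet(x, arch)  # cheap proxy evaluation with shared weights
print(arch, out.shape)
```

Because all candidates share one set of weights, the subnetworks interfere with each other during supernet training, which is one reason the accuracy estimates can be unreliable, as the article notes.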
