Neural Architecture Search (NAS) has been successfully used to automate the design of deep neural network architectures, achieving results that outperform hand-designed models in many computer vision tasks. While these recent works are opening up new paths forward, our understanding of why these specific architectures work well, how similar the architectures derived from different search strategies are, how to design the search space, how to search that space efficiently and in an unsupervised way, and how to fairly evaluate different automatically designed architectures remains far from complete. One goal of this workshop is to bring together emerging research in the areas of automatic architecture search, optimization, hyperparameter optimization, data augmentation, representation learning, and computer vision to discuss open challenges and the opportunities ahead.
Parameter-sharing-based one-shot NAS approaches can significantly reduce the training cost. However, several issues still need to be urgently addressed in the development of lightweight NAS. First, the performance of a network sampled from the supernet is inconsistent with the performance of the same network trained independently, which results in incorrect evaluation and improper ranking of candidate architectures. Second, existing performance prediction benchmarks usually focus on networks from a single search space; evaluating networks across different search spaces has not been explored, which makes the trained prediction models less practical in real-world applications. Thus, another goal of this workshop is to benchmark lightweight NAS in a systematic and realistic way. We aim to take a step forward in advancing the state of the art in lightweight NAS, and we encourage participants to propose novel solutions to these problems. The workshop will provide a thorough quantitative evaluation on the topic of lightweight NAS.
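As an illustration of the first issue, the agreement between supernet-based evaluation and stand-alone training is commonly summarized with a rank correlation coefficient such as Kendall's tau. The sketch below is a minimal example of that measurement, assuming two lists of validation accuracies for the same candidate architectures; the numbers are placeholders, not results from any benchmark.

```python
from scipy.stats import kendalltau

# Hypothetical validation accuracies for the same five candidate architectures:
# one list estimated by inheriting weights from the supernet (one-shot proxy),
# the other obtained by training each candidate independently from scratch.
supernet_acc   = [72.1, 70.4, 73.8, 69.5, 71.2]   # placeholder values
standalone_acc = [74.0, 73.1, 73.5, 70.2, 72.8]   # placeholder values

# Kendall's tau measures how well the proxy ranking agrees with the true ranking:
# a value near 1.0 means the supernet ranks candidates almost perfectly, while a
# low value reflects the evaluation inconsistency described above.
tau, p_value = kendalltau(supernet_acc, standalone_acc)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```

A low correlation under such a measurement is precisely what makes candidate selection from the supernet unreliable, and improving it is one of the challenges this workshop targets.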