Fifth Workshop on Neural Architecture Search

Paper Submission:

We invite submissions of two types: workshop proceedings and extended abstracts. We welcome submissions on any aspect of
NAS and, more broadly, representation learning in vision and beyond. This includes, but is not limited to:
     • Theoretical frameworks and novel objective functions for representation learning
     • Novel network architectures and training protocols
     • Adaptive multi-task and transfer learning
     • Multi-objective optimization and parameter estimation methods
     • Reproducibility in neural architecture search
     • Resource constrained architecture search
     • Automatic data augmentation and hyperparameter optimization
     • Unsupervised learning, domain transfer and life-long learning
     • Computer vision datasets and benchmarks for neural architecture search
     • Search algorithms and evaluation strategies for neural architecture search
     • Consistency issues in one-shot NAS
     • Probabilistic neural architecture search
     • Search space design for neural architecture search

Important Dates:

For workshop proceedings (4-8 pages excluding references),
    • Paper Submission Deadline (submissions must use the final camera-ready format in the layout specified by CVPR
       and must be anonymized): March 14, 2024 (11:59 p.m. PST)
    • Notification to Authors: March 31, 2024 (11:59 p.m. PST)
    • Camera-ready Paper Deadline: April 8, 2024 (11:59 p.m. PST)
    • Submission Guidelines: Submissions should follow the same policies as the main conference.
For extended abstracts (4 pages including references),
    • Paper Submission Deadline (a camera-ready version is required): May 25, 2024 (11:59 p.m. PST)
    • Notification to Authors: June 6, 2024 (11:59 p.m. PST)
    • Submission Guidelines: We solicit short papers of up to 4 pages (including references); accepted papers will be linked
      on the workshop webpage. Submitted work may be a shorter version of work presented at the main conference or
      work in progress on topics relevant to the workshop. Each paper accepted to the workshop will be allocated either a
      contributed talk or a poster presentation, and one paper, recommended during the peer-review period by the
      workshop program chairs, will receive the best paper award.

Manuscripts should follow the CVPR 2024 paper template and be submitted through the CMT link below.
    • Paper Submission Link: https://cmt3.research.microsoft.com/NAS2024
    • Review process: Double-blind (i.e., submissions need to be anonymized)
    • Supplementary Materials: Authors can optionally submit supplemental materials for the paper via CMT.

Accepted papers at the CVPR 2023 NAS workshop:

Accepted proceedings papers:
 https://openaccess.thecvf.com/CVPR2023_workshops/NAS

Accepted papers at the CVPR 2022 NAS workshop:

Accepted proceedings papers:
 https://openaccess.thecvf.com/CVPR2022_workshops/NAS

Winning Solutions:
■    First Place Solution of Track 1
      Yang Zhang, Meixi Liu, He Wei, Zhen Hou, Yangyang Tang, Haiyang Wu, Yuekui Yang [PDF]
■    Second Place Solution of Track 1
      Zhaokai Zhang, He Cai, Chunnan Sheng, Lamei Chen, Tianpeng Feng*, Yandong Guo [PDF]
■    Third Place Solution of Track 1
      Peijie Dong, Xin Niu, Lujun Li, Linzhen Xie, Wenbin Zou, Tian Ye, Zimian Wei, Hengyue Pan [PDF]
■    First Place Solution of Track 2
      Ke Zhang [PDF]
■    Second Place Solution of Track 2
      Kunlong Chen, Liu Yang, Yitian Chen, Kunjin Chen, Yidan Xu, Lujun Li [PDF]
■    Third Place Solution of Track 2
      Di He [PDF]

Accepted extended abstract papers:
■    Learning Where To Look – Generative NAS is Surprisingly Efficient
      Jovita Lukasik*, Steffen Jung*, Margret Keuper [PDF]
■    Improve Ranking Correlation of Super-net through Training Scheme from One-shot NAS to Few-shot NAS
      Jiawei Liu*, Kaiyu Zhang*, Weitai Hu*, and Qing Yang [PDF]
■    LC-NAS: Latency Constrained Neural Architecture Search for Point Cloud Networks
      Guohao Li, Mengmeng Xu, Silvio Giancola, Ali Thabet [PDF]
■    Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution
      Yan Wu, Zhiwu Huang, Suryansh Kumar, Rhea Sanjay Sukthanker, Radu Timofte, Luc Van Gool [PDF]
■    Long-term Reproducibility for Neural Architecture Search
      David Towers, Matthew Forshaw, A. Stephen McGough, Amit Atapour-Abarghouei [PDF]

Accepted papers at the CVPR 2021 NAS workshop:

■    Improving Ranking Correlation of Supernet with Candidates Enhancement and Progressive Training:
      Ziwei Yang, Ruyi Zhang, Zhi Yang, Xubo Yang, Lei Wang and Zheyang Li [PDF]
■    One-Shot Neural Channel Search: What Works and What’s Next:
      Chaoyu Guan, Yijian Qin, Zhikun Wei, Zeyang Zhang, Zizhao Zhang, Xin Wang, and Wenwu Zhu [PDF]
■    Semi-Supervised Accuracy Predictor: SemiLGB:
      Hai Li, Yang Li and Zhengrong Zhuo [PDF]
■    Cascade Bagging for Accuracy Prediction with Few Training Samples:
      Ruyi Zhang, Ziwei Yang, Zhi Yang, Xubo Yang, Lei Wang and Zheyang Li [PDF]
■    A Platform-based Framework for the NAS Performance Prediction Challenge:
      Haocheng Wang, Yuxin Shen, Zifeng Yu, Guoming Sun, Xiaoxing Chen and Chenhan Tsai [PDF]
■    AutoAdapt: Automated Segmentation Network Search for Unsupervised Domain Adaptation:
      Xueqing Deng, Yuxin Tian, Shawn Newsam and Yi Zhu [PDF]
■    NAS-Bench-x11 and the Power of Learning Curves:
      Shen Yan, Colin White, Yash Savani and Frank Hutter [PDF]
■    Bag of Tricks for Neural Architecture Search:
      Thomas Elsken, Benedikt Staffler, Arber Zela, Jan Hendrik Metzen and Frank Hutter [PDF]
■    Group Sparsity: A Unified Framework for Network Pruning and Neural Architecture Search:
      Avraam Chatzimichailidis, Arber Zela, Shalini Shalini, Peter Labus, Janis Keuper, Frank Hutter and Yang Yang [PDF]