Neural architecture search (NAS) can automatically discover well-performing architectures in a large search space and has been shown to bring improvements to various applications. However, the computational burden of NAS is huge, since exploring a large search space can require evaluating thousands of architecture samples. To improve the sample efficiency of search space exploration, predictor-based NAS methods learn a performance predictor for architectures and use it to select worth-evaluating architectures. The encoding scheme of NN architectures is crucial to the predictor's generalization ability, and thus to the efficacy of the NAS process. To this end, we designed a generic Graph-based neural ArchiTecture Encoding Scheme (GATES), which models NN architectures more faithfully by mimicking their data processing. Nevertheless, GATES is unaware of the concrete computing semantics of NN operations or architectures, so the learning of its operation embeddings and weights can only exploit the information in architecture-performance pairs. We propose GATES++, which incorporates multifaceted information about NNs' operation-level and architecture-level computing semantics into its construction and training, respectively. Experiments on benchmark search spaces show that the operation-level and architecture-level information each bring improvements on their own, and that GATES++ discovers better architectures under the same architecture evaluation budget.
This journal paper is an extension of the conference paper GATES@ECCV'20.
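The predictor-based search loop mentioned in the abstract can be illustrated with a minimal sketch: train a predictor on already-evaluated (architecture, performance) pairs, rank a pool of unevaluated candidates with it, and spend the expensive evaluations only on the top-ranked ones. Everything below is an illustrative assumption, not the paper's method: the toy search space, the synthetic `true_performance` function, and the crude linear scoring used in place of an actual encoder-based predictor such as GATES.

```python
import random

def true_performance(arch):
    # Stand-in for an expensive architecture evaluation (e.g. full training);
    # the weights here are an arbitrary illustrative choice.
    return sum(op * w for op, w in zip(arch, (0.5, 0.3, 0.2)))

def fit_predictor(history):
    # Fit crude per-position scores from evaluated (arch, perf) pairs.
    # A real predictor would instead encode the architecture graph.
    n = len(history[0][0])
    scores, counts = [0.0] * n, [1e-9] * n
    for arch, perf in history:
        for i, op in enumerate(arch):
            scores[i] += op * perf
            counts[i] += 1
    return [s / c for s, c in zip(scores, counts)]

def predict(weights, arch):
    # Predicted performance: dot product of learned scores and the arch vector.
    return sum(op * w for op, w in zip(arch, weights))

def predictor_based_search(num_rounds=3, pool_size=50, evals_per_round=5, seed=0):
    rng = random.Random(seed)
    sample = lambda: [rng.random() for _ in range(3)]  # toy 3-dim "architecture"
    # Bootstrap: evaluate a few random architectures to get initial training data.
    history = [(a, true_performance(a)) for a in (sample() for _ in range(evals_per_round))]
    for _ in range(num_rounds):
        weights = fit_predictor(history)
        pool = [sample() for _ in range(pool_size)]
        # Rank the candidate pool by predicted performance; only the
        # top-ranked candidates receive the expensive evaluation.
        pool.sort(key=lambda a: predict(weights, a), reverse=True)
        for arch in pool[:evals_per_round]:
            history.append((arch, true_performance(arch)))
    return max(history, key=lambda p: p[1])

best_arch, best_perf = predictor_based_search()
```

The sample-efficiency argument is visible in the budget accounting: each round scores `pool_size` candidates with the cheap predictor but evaluates only `evals_per_round` of them with the expensive oracle.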