Xuefei Ning
Publications
[ECCV'22] CLOSE: Curriculum Learning On the Sharing Extent Towards Better One-shot NAS
To improve one-shot NAS, we apply curriculum learning to the sharing extent of the one-shot supernet, which yields better ranking correlation between one-shot estimates and true architecture performance.
Zixuan Zhou, Xuefei Ning, Yi Cai, Jiashu Han, Yiping Deng, Yuhan Dong, Huazhong Yang, Yu Wang
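To make the curriculum idea concrete, here is a minimal sketch assuming a simple linear schedule (the paper's actual schedule and supernet design differ): training starts with heavy parameter sharing (an easier curriculum), and the sharing extent is gradually reduced so that parameters can specialize.

```python
# Hedged sketch of a curriculum over sharing extent. The linear schedule
# and the `sharing_groups` helper are illustrative assumptions, not CLOSE's code.
def sharing_groups(epoch, total_epochs, max_groups=8):
    """Number of independent parameter groups in the supernet: 1 (full
    sharing, easy) early, growing toward max_groups (little sharing,
    hard) as training proceeds."""
    frac = epoch / max(total_epochs - 1, 1)
    return 1 + int(frac * (max_groups - 1))

for epoch in [0, 30, 60, 99]:
    print(epoch, sharing_groups(epoch, 100))  # 1, 3, 5, 8
```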
[CVPR'22] CodedVTR: Codebook-based Sparse Voxel Transformer with Geometric Guidance
We design an attention block that projects the attention vector into a subspace represented by a combination of learnable prototypes. Considering the irregularity of 3D data, the spatial supports of the prototypes are designed with different dilations and shapes (clustered from the training data).
Tianchen Zhao, Niansong Zhang, Xuefei Ning, He Wang, Li Yi, Yu Wang
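As an illustration of the codebook idea, here is a minimal sketch: a free-form attention vector is replaced by a convex combination of learnable prototypes. The class name, shapes, and dense-tensor setting are assumptions for brevity; the paper operates on sparse voxels with geometry-aware spatial supports.

```python
import torch
import torch.nn as nn

class CodebookAttention(nn.Module):
    """Illustrative toy, not the paper's implementation: project an
    attention vector onto the subspace spanned by K learnable prototypes."""
    def __init__(self, dim, num_prototypes=8):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))

    def forward(self, attn_vec):                 # attn_vec: (B, dim)
        sim = attn_vec @ self.prototypes.t()     # (B, K) similarity to codebook
        weights = sim.softmax(dim=-1)            # soft assignment over prototypes
        return weights @ self.prototypes         # reconstructed attention vector

x = torch.randn(4, 32)
print(CodebookAttention(32)(x).shape)            # torch.Size([4, 32])
```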
[NeurIPS'21] Evaluating Efficient Performance Estimators of Neural Architectures
We study one-shot performance estimators and eight types of zero-shot estimators on five different benchmarks (NAS-Bench-101, NAS-Bench-201, NAS-Bench-301, NDS ResNet, and NDS ResNeXt-A).
Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, Yu Wang
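The evaluation protocol can be illustrated with a short sketch: score each architecture with a cheap estimator, then measure how well that score ranks architectures against their ground-truth accuracies (e.g., with Kendall's tau). The numbers below are synthetic stand-ins, not results from the paper.

```python
from scipy.stats import kendalltau

true_acc = [93.1, 91.4, 94.0, 90.2, 92.5]    # hypothetical benchmark accuracies
est_score = [0.61, 0.48, 0.70, 0.45, 0.52]   # hypothetical estimator scores

tau, _ = kendalltau(est_score, true_acc)
print(f"Kendall's tau: {tau:.3f}")            # 1.0 = perfect ranking agreement
```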
[NeurIPS'22] TA-GATES: An Encoding Scheme for Neural Network Architectures
TA-GATES is an encoding scheme specially designed for neural architectures, accounting for their distinguishing property of being DAGs with trainable operations. TA-GATES encodes an architecture by mimicking its training process, thereby providing more discriminative architecture-level and operation-level encodings.
Xuefei Ning, Zixuan Zhou, Junbo Zhao, Tianchen Zhao, Yiping Deng, Changcheng Tang, Shuang Liang, Huazhong Yang, Yu Wang
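Here is a heavily hedged toy of the "mimic training" idea: operation embeddings are refined over several virtual training steps, each consisting of information propagation along the DAG followed by an embedding update. The module below is an illustrative assumption, not TA-GATES itself.

```python
import torch
import torch.nn as nn

class IterativeDAGEncoder(nn.Module):
    """Toy sketch: refine operation embeddings over a few virtual 'training
    steps' before pooling them into an architecture-level encoding."""
    def __init__(self, dim, steps=3):
        super().__init__()
        self.steps = steps
        self.update = nn.GRUCell(dim, dim)       # refines op embeddings per step

    def forward(self, adj, op_emb):
        # adj: (N, N) DAG adjacency; op_emb: (N, dim) operation embeddings
        for _ in range(self.steps):
            info = adj @ op_emb                  # propagate along DAG edges
            op_emb = self.update(info, op_emb)   # virtual training-step update
        return op_emb.mean(dim=0)                # architecture-level encoding

adj = torch.triu(torch.ones(5, 5), diagonal=1)   # toy 5-node DAG
print(IterativeDAGEncoder(16)(adj, torch.randn(5, 16)).shape)  # torch.Size([16])
```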
[ECCV'20] A Generic Graph-based Neural Architecture Encoding Scheme for Predictor-based NAS
To improve the sample efficiency of NAS, we follow the line of predictor-based NAS and improve both the encoder design and the training of the predictor: (1) we design a Generic Graph-based neural ArchiTecture Encoding Scheme (GATES) to better encode NN architectures, and (2) we propose using a ranking loss to train the predictor.
Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, Huazhong Yang
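The ranking-loss idea in (2) can be sketched in a few lines: for every pair of architectures where i truly outperforms j, the predictor's score for i should exceed that for j by a margin. This is a minimal pairwise hinge formulation, an assumption for illustration rather than the exact loss in the paper.

```python
import torch

def pairwise_ranking_loss(scores, accs, margin=0.1):
    """scores: (N,) predicted scores; accs: (N,) ground-truth accuracies."""
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)          # (N, N): s_i - s_j
    better = (accs.unsqueeze(1) > accs.unsqueeze(0)).float()  # 1 where acc_i > acc_j
    hinge = torch.relu(margin - diff)                         # penalize mis-ordered pairs
    return (better * hinge).sum() / better.sum().clamp(min=1)

scores = torch.randn(8, requires_grad=True)  # stand-in for predictor outputs
accs = torch.rand(8)
pairwise_ranking_loss(scores, accs).backward()
```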
[AAAI'23] Dynamic Ensemble of Low-fidelity Experts: Mitigating NAS Cold-Start
To mitigate the cold-start problem of predictor-based NAS, we design an ensemble method that fuses the knowledge of multiple experts trained with low-fidelity architectural information (e.g., complexity measures, zero-shot metrics).
Junbo Zhao, Xuefei Ning, Enshu Liu, Binxin Ru, Zixuan Zhou, Tianchen Zhao, Chen Chen, Jiajin Zhang, Qingmin Liao, Yu Wang
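A minimal sketch of the ensemble idea, with all names and shapes assumed for illustration: each low-fidelity expert scores an architecture from one cheap signal (e.g., FLOPs or a zero-shot metric), and a learned gate mixes the expert predictions per architecture.

```python
import torch
import torch.nn as nn

class DynamicEnsemble(nn.Module):
    """Illustrative toy, not the paper's model: per-architecture gating
    over the predictions of several low-fidelity experts."""
    def __init__(self, feat_dim, num_experts):
        super().__init__()
        self.gate = nn.Linear(feat_dim, num_experts)   # mixing-weight head

    def forward(self, arch_feat, expert_scores):
        # arch_feat: (B, feat_dim); expert_scores: (B, num_experts)
        weights = self.gate(arch_feat).softmax(dim=-1)
        return (weights * expert_scores).sum(dim=-1)   # fused prediction, (B,)

feat, scores = torch.randn(4, 10), torch.randn(4, 3)
print(DynamicEnsemble(10, 3)(feat, scores).shape)      # torch.Size([4])
```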
[AAAI'23] Ensemble-in-One: Ensemble Learning within Random Gated Networks for Enhanced Adversarial Robustness
We leverage parameter sharing to enable an efficient and scalable ensemble training method for enhanced black-box adversarial robustness.
Yi Cai, Xuefei Ning, Huazhong Yang, Yu Wang
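The gist can be sketched as follows, with all module names assumed: each block holds several parallel candidate paths, and sampling one gate per block instantiates one ensemble member, so many members coexist within a single shared network.

```python
import random
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    """Toy random-gated block: one of `num_paths` parallel paths is
    sampled per forward pass (illustrative, not the paper's code)."""
    def __init__(self, dim, num_paths=2):
        super().__init__()
        self.paths = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_paths)])

    def forward(self, x):
        return self.paths[random.randrange(len(self.paths))](x)  # sample a gate

net = nn.Sequential(GatedBlock(16), nn.ReLU(), GatedBlock(16))
x = torch.randn(1, 16)
member_a, member_b = net(x), net(x)  # two sampled ensemble members, shared params
```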
[AAAI'23] Memory-Oriented Structural Pruning for Efficient Image Restoration
IR models are extremely memory-intensive and call for memory-oriented compression. We design a pruning flow that cuts down peak memory usage, with special handling of the long-range skip connections, which incur large peak memory overhead yet are largely redundant for task performance.
Xiangsheng Shi, Xuefei Ning, Lidong Guo, Tianchen Zhao, Enshu Liu, Yi Cai, Yuhan Dong, Huazhong Yang, Yu Wang
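Why long-range skips dominate peak memory can be shown with a tiny liveness calculation (the sizes and the helper below are illustrative assumptions): an activation feeding a distant skip must stay resident while every intermediate layer executes.

```python
def peak_memory(layer_sizes, skips):
    """layer_sizes[i]: activation size produced by layer i (e.g., MB).
    skips: (src, dst) pairs; layer src's output is reused at layer dst."""
    peak = 0
    for i, size in enumerate(layer_sizes):
        live = size  # activation currently being produced
        # every skip spanning layer i keeps its source activation alive
        live += sum(layer_sizes[s] for s, d in skips if s < i <= d)
        peak = max(peak, live)
    return peak

sizes = [64, 32, 16, 32, 64]           # a toy U-shaped IR network
print(peak_memory(sizes, [(0, 4)]))    # 128: the skip keeps 64 MB alive
print(peak_memory(sizes, []))          # 64: skip pruned away
```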