
Self-Distillation for Fast-Adaptive Automatic Modulation Classification
  • Yunhao Shi,
  • Hua Xu,
  • Zisen Qi,
  • Dan Wang,
  • Yue Zhang
Air Force Engineering University

Corresponding Author: Zisen Qi ([email protected])

Abstract

Deep learning models have driven rapid advances in Automatic Modulation Classification (AMC). However, existing AMC models frequently fail to generalize to test data drawn from a different distribution. In this letter, we propose a simple and efficient baseline: incorporating a self-distillation (SD) training strategy into an advanced backbone network. SD constructs a series of training tasks that continuously retrain the model on the source dataset, producing a well-generalized backbone. The trained model then serves as the foundation for fine-tuning on the target dataset. The backbone network uses multi-stream inputs and multi-scale convolutional kernels to increase feature diversity. In addition, the Convolutional Block Attention Module (CBAM) and residual connections are introduced into the backbone. Experiments conducted on RadioML2016.10a and RadioML2018.01a demonstrate the superiority of the proposed backbone network and show that SD further boosts the generalization ability of the model.
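The abstract does not give implementation details, so the following is only a minimal, illustrative PyTorch sketch of one common form of iterative self-distillation on a source dataset, followed by handing the resulting backbone over for target fine-tuning. The function name `self_distill`, the loss weighting (`alpha`, `temperature`), and the number of generations are assumptions for illustration, not values from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def self_distill(backbone, source_loader, generations=3, epochs=10,
                 alpha=0.5, temperature=4.0, lr=1e-3, device="cpu"):
    """Iterative self-distillation on the source dataset (illustrative sketch).

    Each generation retrains the backbone with a cross-entropy loss on the
    ground-truth labels plus a KL term that matches the previous generation's
    soft outputs. Hyper-parameters here are assumed, not taken from the paper.
    """
    teacher = None
    student = backbone.to(device)
    for _ in range(generations):
        optimizer = torch.optim.Adam(student.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in source_loader:
                x, y = x.to(device), y.to(device)
                logits = student(x)
                loss = F.cross_entropy(logits, y)
                if teacher is not None:
                    with torch.no_grad():
                        t_logits = teacher(x)
                    # Soft-target KL divergence against the previous generation
                    kl = F.kl_div(
                        F.log_softmax(logits / temperature, dim=1),
                        F.softmax(t_logits / temperature, dim=1),
                        reduction="batchmean",
                    ) * temperature ** 2
                    loss = (1 - alpha) * loss + alpha * kl
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        # The freshly trained student becomes the teacher for the next round
        teacher = copy.deepcopy(student).eval()
    return student  # generalized backbone, ready for fine-tuning on the target set
```

In the train-then-adapt flow described in the abstract, the returned backbone would subsequently be fine-tuned on the target dataset, typically with a smaller learning rate.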