Semantic-Aware Federated Blockage Prediction (SFBP) in Vision-Aided Next-Generation Wireless Network
  • Ahsan Raza Khan,
  • Habib Ullah Manzoor,
  • Rao Naveed Bin Rais,
  • Sajjad Hussain,
  • Lina Mohjazi,
  • Muhammad Ali Imran,
  • Ahmed Zoha
Ahsan Raza Khan
James Watt School of Engineering, University of Glasgow

Corresponding Author: [email protected]

Habib Ullah Manzoor
James Watt School of Engineering, University of Glasgow
Rao Naveed Bin Rais
Artificial Intelligence Research Centre (AIRC), Ajman University
Sajjad Hussain
James Watt School of Engineering, University of Glasgow
Lina Mohjazi
James Watt School of Engineering, University of Glasgow
Muhammad Ali Imran
Artificial Intelligence Research Centre (AIRC), Ajman University, James Watt School of Engineering, University of Glasgow
Ahmed Zoha
James Watt School of Engineering, University of Glasgow

Abstract

Predicting signal blockages in millimetre wave (mmWave) and terahertz (THz) networks is challenging because it requires anticipating changes in the environment. One promising solution is to combine multi-modal data, such as vision and wireless inputs, with deep learning. However, fusing these data sources can incur high communication costs, inefficient bandwidth usage, and undesirable latency. This paper proposes a semantic-aware federated blockage prediction (SFBP) framework for vision-aided next-generation wireless networks. The framework uses computer vision techniques to extract semantic information from images and performs distributed on-device learning to enhance blockage prediction, with federated learning enabling collaborative model training without exposing private data. The proposed framework achieves 97.5% blockage prediction accuracy, close to that of centralised training. By training models on semantic information, SFBP reduces communication costs by 88.75% and 57.87% compared to centralised learning and federated learning without semantics, respectively. On-device inference further reduces latency by 23% and 18% compared to centralised and federated learning without semantics, respectively.
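The training loop the abstract describes, in which each device learns from locally extracted semantic features and only model updates are aggregated by a server (federated averaging), can be sketched as follows. This is a minimal illustration only: the thresholding "segmentation" and the logistic model are hypothetical stand-ins for the paper's vision pipeline and deep network, and the data here is random.

```python
import numpy as np

def extract_semantic_mask(frame, threshold=0.5):
    # Stand-in for a vision model that keeps only blockage-relevant pixels.
    # A real system would run a trained segmentation/detection network here;
    # transmitting or training on this compact mask instead of raw frames is
    # what reduces communication cost.
    return (frame > threshold).astype(np.float32)

def local_update(weights, frames, labels, lr=0.1):
    # One round of on-device training on semantic features, using logistic
    # regression as a toy substitute for the actual prediction model.
    w = weights.copy()
    for frame, y in zip(frames, labels):
        x = extract_semantic_mask(frame).ravel()
        p = 1.0 / (1.0 + np.exp(-x @ w))   # predicted blockage probability
        w -= lr * (p - y) * x              # gradient step on cross-entropy loss
    return w

def federated_average(client_weights):
    # FedAvg: the server aggregates model parameters, never raw images,
    # so private visual data stays on each device.
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(16)                    # model for flattened 4x4 masks
for _ in range(5):                         # communication rounds
    updates = []
    for _ in range(3):                     # participating client devices
        frames = rng.random((8, 4, 4))     # synthetic camera frames
        labels = rng.integers(0, 2, 8)     # blocked / not blocked
        updates.append(local_update(global_w, frames, labels))
    global_w = federated_average(updates)
```

The key design point mirrored here is that raw frames never leave the client: only the (much smaller) weight vectors travel to the server each round.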
07 Jan 2024: Submitted to TechRxiv
10 Jan 2024: Published in TechRxiv