Entropy-based Adaptive Gradient Quantization in Federated Learning for Internet of Vehicles
DOI:
https://doi.org/10.54097/babjfm42

Keywords:
Internet of Vehicles, Federated Learning, Gradient Quantization

Abstract
Federated learning for the Internet of Vehicles builds an intelligent transportation system with real-time responsiveness by integrating traffic data from vehicle nodes, roadside units, and infrastructure to collaboratively train high-quality models. As the Internet of Vehicles architecture continues to expand, frequent exchanges of gradient data between roadside units and vehicle nodes increase the uplink channel load and communication delay of federated learning systems. To alleviate this delay, existing works propose gradient quantization algorithms that reduce communication bandwidth overhead by cutting the transmission of redundant data. However, these algorithms discard gradient data indiscriminately, which degrades the accuracy of the aggregated model. To balance model accuracy against communication overhead, we propose an entropy-based adaptive gradient quantization scheme for federated learning (eaqfed). During model updating, eaqfed dynamically adjusts the gradient quantization level according to the entropy of the gradients, maintaining model accuracy while reducing communication cost.
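The paper's exact quantization rule is not reproduced on this page, but the idea described in the abstract can be sketched. The following minimal Python example shows one plausible reading: estimate the Shannon entropy of the current gradient, map it to a quantization bit-width, and apply a QSGD-style stochastic quantizer. The function names, the histogram bin count, and the entropy-to-bit-width mapping are all illustrative assumptions, not eaqfed's actual algorithm.

```python
import numpy as np

def gradient_entropy(grad, num_bins=64):
    """Shannon entropy (in bits) of the gradient's empirical magnitude distribution."""
    counts, _ = np.histogram(np.abs(grad), bins=num_bins)
    probs = counts / counts.sum()
    probs = probs[probs > 0]                  # drop empty bins before taking logs
    return -np.sum(probs * np.log2(probs))

def select_bit_width(entropy, max_entropy, min_bits=2, max_bits=8):
    """Map normalized entropy to a bit-width (assumed mapping): information-rich,
    high-entropy gradients get finer quantization; low-entropy ones get coarser levels."""
    ratio = min(entropy / max_entropy, 1.0)
    return int(round(min_bits + ratio * (max_bits - min_bits)))

def stochastic_quantize(grad, bits):
    """QSGD-style unbiased stochastic quantization onto s = 2**bits - 1 levels."""
    s = 2 ** bits - 1
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    scaled = np.abs(grad) / norm * s          # position of each entry in [0, s]
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part, keeping E[q] = grad.
    levels = lower + (np.random.rand(*grad.shape) < (scaled - lower))
    return np.sign(grad) * levels / s * norm

# Example: one vehicle node's gradient in a single federated round.
grad = np.random.randn(100_000).astype(np.float32)
H = gradient_entropy(grad)
bits = select_bit_width(H, max_entropy=np.log2(64))   # 64-bin histogram caps entropy at 6 bits
quantized = stochastic_quantize(grad, bits)
```

Under this reading, each node would transmit only the sign bits, the integer quantization levels, and one norm scalar per round, so the entropy-selected bit-width directly controls the uplink payload.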
License
Copyright (c) 2025 Journal of Computer Science and Artificial Intelligence

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.