Personalized and Communication Cost Reduction Models in Federated Learning
Artificial Intelligence (AI) has widespread applications, including personalization of the customer experience and lifestyle recommendations, which sometimes raise concerns regarding data privacy, especially in medical applications. Federated learning is used to address this user-privacy problem. Most federated models are implemented by connecting billions of edge devices for privacy-preserving, on-device training. However, edge devices have limited network resources, which hinders effective communication of the machine learning models. Researchers have used techniques such as quantization and asynchronous updates to reduce communication costs, but these techniques yield improvements of only 10-15%. Furthermore, most current personalized machine learning models are trained by accessing user information on a centralized machine learning server. This research proposes two models that address the above communication-cost and personalization problems in federated learning. The first proposed model, Federated Learning with Improved Communication Cost (FLICC), reduces the communication cost by around 40% by transmitting only around 60% of the model parameters in each communication round. To demonstrate this reduction, we conducted experiments on convolutional neural networks, multi-layer perceptrons, and long short-term memory networks using the MNIST, CIFAR-10, and NN5 datasets. These experiments confirm the efficacy of the FLICC model: it achieves results competitive with conventional federated learning models while reducing the communication cost. The second proposed model is the Personalized Federated Learning (PFL) model, trained using privacy-preserving machine learning techniques in compliance with data privacy laws such as the GDPR (General Data Protection Regulation).
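The idea of transmitting only a fraction of the model parameters per round can be sketched as follows. This is a minimal illustration, not FLICC itself: the abstract does not state *which* 60% of parameters are selected, so random selection is used here purely as a placeholder criterion.

```python
import numpy as np

def select_parameters(weights, fraction=0.6, rng=None):
    """Keep only `fraction` of a flat weight vector for transmission,
    zeroing the rest. The random selection rule is an assumption;
    FLICC's actual selection criterion is described in the full text."""
    rng = rng or np.random.default_rng(0)
    k = int(fraction * weights.size)                  # entries to transmit
    idx = rng.choice(weights.size, size=k, replace=False)
    sparse = np.zeros_like(weights)
    sparse[idx] = weights[idx]                        # ~60% of parameters survive
    return idx, weights[idx], sparse

w = np.arange(10, dtype=float)
idx, vals, sparse = select_parameters(w)
print(len(idx))  # 6 of 10 parameters transmitted this round
```

Sending only the selected indices and values, rather than the full vector, is what yields the per-round bandwidth savings.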
In this model, clients are clustered based on similar metadata, and the PFL model is sent for on-device training on those clusters to provide personalized results to the users in each cluster. The proposed hypothesis was validated using an LSTM model on the MIMIC-III dataset, which contains series of medical records for different patients. Experiments and results show that the PFL model outperforms the conventional federated learning model on the MSE, MAPE, and SMAPE evaluation metrics. The results from both models suggest that they can solve the problems of communication overhead and personalization in a federated setting.
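The metadata-based clustering step can be illustrated with a small sketch. The client records and metadata fields below (`age_group`, `condition`) are hypothetical; the abstract does not specify which metadata attributes the PFL model actually clusters on.

```python
from collections import defaultdict

# Hypothetical client metadata records (illustrative, not from MIMIC-III).
clients = [
    {"id": 1, "age_group": "60+",   "condition": "cardiac"},
    {"id": 2, "age_group": "60+",   "condition": "cardiac"},
    {"id": 3, "age_group": "18-40", "condition": "respiratory"},
]

def cluster_by_metadata(clients, keys=("age_group", "condition")):
    """Group clients whose chosen metadata fields match exactly; each
    cluster would then receive its own copy of the model for
    on-device training, yielding per-cluster personalization."""
    clusters = defaultdict(list)
    for c in clients:
        clusters[tuple(c[k] for k in keys)].append(c["id"])
    return dict(clusters)

print(cluster_by_metadata(clients))
# {('60+', 'cardiac'): [1, 2], ('18-40', 'respiratory'): [3]}
```

Because only cluster-level metadata is used to route models, raw patient records never leave the device, which is what keeps the scheme compatible with regulations such as the GDPR.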