Show simple item record

dc.contributor.advisor: Manashty, Alireza
dc.contributor.author: Tandra, Sandeep Reddy
dc.date.accessioned: 2022-08-05T17:50:24Z
dc.date.available: 2022-08-05T17:50:24Z
dc.date.issued: 2022-01
dc.identifier.uri: http://hdl.handle.net/10294/15033
dc.description: A Thesis Submitted to the Faculty of Graduate Studies and Research in Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Science, University of Regina. xi, 80 p. [en_US]
dc.description.abstract: Artificial Intelligence (AI) has widespread applications, including personalizing the customer experience and making lifestyle recommendations, which sometimes raise concerns about data privacy, especially in medical applications. Federated learning addresses this privacy problem: most federated models connect billions of edge devices for privacy-preserving, on-device training. However, edge devices have limited network resources, which hinders efficient communication of machine learning models. Researchers have applied techniques such as quantization and asynchronous updates to reduce communication costs, but these yield improvements of only 10-15%. Furthermore, most current personalized machine learning models are trained by accessing user information on a centralized machine learning server. This research proposes two models that address the communication-cost and personalization problems in federated learning. The first, Federated Learning with Improved Communication Cost (FLICC), reduces the communication cost by around 40% by transmitting only around 60% of the model parameters in each communication round. To demonstrate this reduction, we conducted experiments on convolutional neural networks, multi-layer perceptrons, and long short-term memory (LSTM) networks using the MNIST, CIFAR-10, and NN5 datasets. These experiments confirm the efficacy of the FLICC model: it achieves results competitive with conventional federated learning models while reducing the communication cost. The second is the Personalized Federated Learning (PFL) model, trained with privacy-preserving machine learning techniques in compliance with data-privacy laws such as the GDPR (General Data Protection Regulation).
In this model, clients are clustered based on similar metadata, and the PFL model is sent to those clusters for on-device training, providing personalized results to the users in each cluster. The hypothesis was validated using an LSTM model on the MIMIC-III dataset, which contains series of medical records for different patients. Experiments show that the PFL model outperforms the conventional federated learning model on the MSE, MAPE, and SMAPE evaluation metrics. Together, the results indicate that the two models can mitigate communication overhead and enable personalization in a federated setting. [en_US]
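The abstract does not specify how FLICC selects which ~60% of parameters to transmit. As a hedged illustration of the general idea only (sending a fraction of the model update each round to cut communication cost), here is a minimal sketch using simple magnitude-based selection; the function names and the selection rule are hypothetical and not taken from the thesis.

```python
# Hypothetical sketch of partial-parameter communication in federated
# learning: a client sends only the top ~60% of its update entries by
# magnitude, reducing per-round traffic roughly proportionally.
# Illustration of the general idea only, not the thesis's actual method.

def select_partial_update(update, fraction=0.6):
    """Keep the `fraction` of entries with the largest magnitude;
    return a sparse dict {index: value} to 'transmit'."""
    k = max(1, int(len(update) * fraction))
    top = sorted(range(len(update)),
                 key=lambda i: abs(update[i]), reverse=True)[:k]
    return {i: update[i] for i in top}

def apply_partial_update(weights, sparse_update, lr=1.0):
    """Server side: apply the received sparse update to the global model."""
    new_weights = list(weights)
    for i, delta in sparse_update.items():
        new_weights[i] += lr * delta
    return new_weights

# Toy round: one client, a 10-parameter "model".
global_w = [0.0] * 10
client_update = [0.5, -0.1, 0.9, 0.02, -0.7, 0.3, -0.05, 0.8, 0.01, -0.4]
sparse = select_partial_update(client_update, fraction=0.6)  # 6 of 10 sent
global_w = apply_partial_update(global_w, sparse)
```

In this sketch the 4 smallest-magnitude entries are never transmitted, so the corresponding global weights stay unchanged for that round; whether FLICC uses magnitude, a fixed mask, or another criterion is not stated in the abstract.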
dc.language.iso: en [en_US]
dc.publisher: Faculty of Graduate Studies and Research, University of Regina [en_US]
dc.title: Personalized and Communication Cost Reduction Models in Federated Learning [en_US]
dc.type: Thesis [en_US]
dc.description.authorstatus: Student [en]
dc.description.peerreview: yes [en]
thesis.degree.name: Master of Science (MSc) [en_US]
thesis.degree.level: Master's [en]
thesis.degree.discipline: Computer Science [en_US]
thesis.degree.grantor: University of Regina [en]
thesis.degree.department: Department of Computer Science [en_US]
dc.contributor.committeemember: Yao, JingTao
dc.contributor.externalexaminer: Bais, Abdul
dc.identifier.tcnumber: TC-SRU-15033
dc.identifier.thesisurl: https://ourspace.uregina.ca/bitstream/handle/10294/15033/Tandra_Sandeep_MSC_CS_Spring2022.pdf

