dc.contributor.advisor | Manashty, Alireza | |
dc.contributor.author | Tandra, Sandeep Reddy | |
dc.date.accessioned | 2022-08-05T17:50:24Z | |
dc.date.available | 2022-08-05T17:50:24Z | |
dc.date.issued | 2022-01 | |
dc.identifier.uri | http://hdl.handle.net/10294/15033 | |
dc.description | A Thesis Submitted to the Faculty of Graduate Studies and Research In Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Science, University of Regina. xi, 80 p. | en_US |
dc.description.abstract | Artificial Intelligence (AI) has widespread applications, including personalizing the customer experience and making lifestyle recommendations, which sometimes raise concerns about data privacy, especially in medical applications. Federated learning is used to address this user-privacy problem. Most federated models are implemented by connecting billions of edge devices for privacy-preserving, on-device training. However, edge devices have limited network resources, which hinders effective communication of the machine learning models. Researchers have used techniques such as quantization and asynchronous updates to reduce communication costs, but these techniques yield an improvement of only 10-15%. Furthermore, most current personalized machine learning models are trained by accessing user information on a centralized machine learning server. This research proposes two models that address the above communication-cost and personalization problems in federated learning. The first proposed model is Federated Learning with Improved Communication Cost (FLICC), which reduces the communication cost by around 40% by transmitting only about 60% of the model parameters in each communication round. To demonstrate that the model reduces the communication cost, we conducted experiments on convolutional neural networks, multi-layer perceptrons, and long short-term memory networks using the MNIST, CIFAR-10, and NN5 datasets. These experiments confirm the efficacy of the FLICC model, as it achieves results competitive with conventional federated learning models while reducing the communication cost. The second proposed model is the Personalized Federated Learning (PFL) model, which is trained using privacy-preserving machine learning techniques and abides by data-privacy laws such as the General Data Protection Regulation (GDPR). In this model, clients are clustered based on similar metadata, and the PFL model is sent to each cluster for on-device training to provide personalized results to the users in that cluster. The proposed hypothesis is validated using an LSTM model on the MIMIC-III dataset, which contains series of medical records from different patients. Experiments show that the PFL model achieves better results on the MSE, MAPE, and SMAPE evaluation metrics than the conventional federated learning model. The results from both models indicate that they can address the problems of communication overhead and personalization in a federated setting. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Faculty of Graduate Studies and Research, University of Regina | en_US |
dc.title | Personalized and Communication Cost Reduction Models in Federated Learning | en_US |
dc.type | Thesis | en_US |
dc.description.authorstatus | Student | en |
dc.description.peerreview | yes | en |
thesis.degree.name | Master of Science (MSc) | en_US |
thesis.degree.level | Master's | en |
thesis.degree.discipline | Computer Science | en_US |
thesis.degree.grantor | University of Regina | en |
thesis.degree.department | Department of Computer Science | en_US |
dc.contributor.committeemember | Yao, JingTao | |
dc.contributor.externalexaminer | Bais, Abdul | |
dc.identifier.tcnumber | TC-SRU-15033 | |
dc.identifier.thesisurl | https://ourspace.uregina.ca/bitstream/handle/10294/15033/Tandra_Sandeep_MSC_CS_Spring2022.pdf | |
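Note: the abstract above states that FLICC lowers communication cost by transmitting only around 60% of the model parameters in each round, but this record does not describe how parameters are selected or aggregated. The sketch below is a minimal, hypothetical NumPy illustration of that general idea, in which each client sends a random subset of its parameters and the server averages only the entries it receives. The function names, the random-mask policy, and the noise-based stand-in for local training are assumptions for illustration, not the thesis's actual FLICC algorithm.

import numpy as np

def client_update(global_params, rng, fraction=0.6):
    # Stand-in for local training: perturb the global parameters slightly.
    local_params = global_params + rng.normal(scale=0.01, size=global_params.shape)
    # Transmit only a random ~60% of the entries; untransmitted entries are marked NaN.
    mask = rng.random(global_params.shape) < fraction
    return np.where(mask, local_params, np.nan), mask

def server_aggregate(global_params, payloads):
    # Average each parameter over the clients that actually transmitted it;
    # parameters that no client sent keep their previous global value.
    values = np.stack([p for p, _ in payloads])
    counts = np.stack([m for _, m in payloads]).sum(axis=0)
    summed = np.nansum(values, axis=0)
    return np.divide(summed, counts, out=global_params.copy(), where=counts > 0)

rng = np.random.default_rng(0)
global_params = np.zeros(10)
for _ in range(3):  # a few simulated communication rounds with 5 clients
    payloads = [client_update(global_params, rng) for _ in range(5)]
    global_params = server_aggregate(global_params, payloads)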