THESIS
2023
Abstract
Federated Learning (FL) has emerged as a pioneering paradigm that enables multiple clients to collaboratively train a shared model while preserving the privacy of their sensitive raw data. Despite FL's tremendous success, its implementation faces several critical challenges. This thesis investigates two prominent challenges in FL systems: the substantial communication overhead and system heterogeneity.

The first challenge is the substantial communication overhead inherent in FL, which hinders scalability and efficiency. To address this communication-intensive training issue, we propose a novel training method called Federated Learning with Dual-Side Low-Rank Compression (FedDLR). By employing low-rank approximations to compress the deep learning model at both the server and client sides, FedDLR significantly reduces communication requirements while maintaining model accuracy.

The second challenge concerns the heterogeneity present in FL systems, encompassing variations in client devices, network conditions, and data distributions. To tackle this challenge, we study the client selection problem in FL systems. Through a comprehensive theoretical convergence analysis, we characterize the influence of client selection strategies. Building on this analysis, we formulate client selection as a combinatorial optimization problem and propose a novel client selection method named Content-Aware Client Selection (CACS). CACS intelligently selects a subset of clients while accounting for the heterogeneity among them, markedly improving learning efficiency. A comprehensive theoretical analysis together with extensive experiments demonstrates the effectiveness and superiority of the proposed methods.
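To give a flavor of the low-rank idea underlying FedDLR, the following is a minimal sketch (not the thesis's actual algorithm) of compressing a single weight matrix via truncated SVD before transmission: instead of sending an m-by-n matrix W, a client or server sends two thin factors A and B whose product approximates W, cutting the payload from m*n to r*(m+n) parameters. The function name and shapes here are illustrative assumptions.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Approximate W (m x n) with rank-r factors A (m x r) and B (r x n) via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Example: a 512 x 512 layer compressed to rank 16
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B = low_rank_compress(W, rank=16)

original_params = W.size                     # 512 * 512 = 262144
compressed_params = A.size + B.size          # 16 * (512 + 512) = 16384
ratio = original_params / compressed_params  # 16x fewer parameters to transmit
W_approx = A @ B                             # receiver reconstructs the approximation
```

The compression ratio r*(m+n) / (m*n) improves as the rank r shrinks, at the cost of approximation error; trained network weights are often close to low-rank, which is what makes this trade-off attractive in communication-constrained FL.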