Exploring personalization via federated representation learning on non-IID data
Department
Software Engineering and Game Development
Document Type
Article
Publication Date
6-1-2023
Abstract
Federated Learning (FL) learns a global model across decentralized data held by different clients. However, it is susceptible to the statistical heterogeneity of client-specific data. Each client optimizes for its own target distribution, which causes the global model to diverge under inconsistent data distributions. Moreover, federated learning approaches typically learn representations and classifiers jointly, which further exacerbates this inconsistency and results in imbalanced features and biased classifiers. Hence, in this paper, we propose an independent two-stage personalized FL framework, Fed-RepPer, that separates representation learning from classification in federated learning. In the first stage, client-side feature representation models are learned with a supervised contrastive loss, which keeps the local objectives consistent, i.e., learning robust representations on distinct data distributions. The local representation models are aggregated into a common global representation model. In the second stage, personalization is achieved by learning a separate classifier for each client on top of the global representation model. The proposed two-stage learning scheme is examined in lightweight edge-computing settings involving devices with constrained computation resources. Experiments on CIFAR-10/100 and CINIC-10 under heterogeneous data setups show that Fed-RepPer outperforms alternative methods through its flexibility and personalization on non-IID data.
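To make the two-stage scheme concrete, below is a minimal PyTorch sketch under illustrative assumptions: a toy MLP encoder, a simplified supervised contrastive loss, FedAvg-style weight averaging, and random tensors in place of CIFAR/CINIC data. The names Encoder, sup_con_loss, and fed_avg are placeholders for exposition, not the authors' implementation.

```python
# Sketch of the abstract's two stages: (1) federated representation learning
# with a supervised contrastive loss, (2) per-client classifier heads on a
# frozen global representation. All names and hyperparameters are illustrative.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Client-side feature representation model (stage 1)."""
    def __init__(self, in_dim=3 * 32 * 32, rep_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 256),
                                 nn.ReLU(), nn.Linear(256, rep_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-norm embeddings

def sup_con_loss(z, labels, temperature=0.07):
    """Simplified supervised contrastive loss: pull same-class embeddings together."""
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)          # exclude self-similarity
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos.sum(1).clamp(min=1)
    return -(log_prob * pos).sum(1).div(pos_count).mean()

def fed_avg(models):
    """Average client model weights into the global representation model."""
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key].float() for m in models]).mean(0)
    return avg

# Toy non-IID setup: one (images, labels) batch per client.
num_clients, rounds = 3, 2
client_data = [(torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,)))
               for _ in range(num_clients)]

# --- Stage 1: federated representation learning ---
global_enc = Encoder()
for _ in range(rounds):
    local_encoders = []
    for x, y in client_data:
        enc = copy.deepcopy(global_enc)              # start from global model
        opt = torch.optim.SGD(enc.parameters(), lr=0.01)
        loss = sup_con_loss(enc(x), y)               # local contrastive objective
        opt.zero_grad()
        loss.backward()
        opt.step()
        local_encoders.append(enc)
    global_enc.load_state_dict(fed_avg(local_encoders))  # aggregate on the server

# --- Stage 2: personalized classifiers on the frozen global representation ---
for x, y in client_data:
    head = nn.Linear(128, 10)                        # per-client classifier
    opt = torch.optim.SGD(head.parameters(), lr=0.1)
    with torch.no_grad():
        z = global_enc(x)                            # shared features, kept frozen
    loss = F.cross_entropy(head(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this reading, only the lightweight classifier head is trained per client in stage 2 while the shared encoder stays frozen, which is one way the scheme could suit the resource-constrained edge devices mentioned in the abstract.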
Journal Title
Neural Networks
Journal ISSN
0893-6080
Volume
163
First Page
354
Last Page
366
Digital Object Identifier (DOI)
10.1016/j.neunet.2023.04.007