Machine and federated learning for high-performance computing: a survey
Online publication date: Mon, 07-Feb-2022
by Akshat Gaurav; Konstantinos E. Psannis
International Journal of High Performance Computing and Networking (IJHPCN), Vol. 17, No. 1, 2021
Abstract: A paradigm shift in machine learning (ML) application models has occurred in recent years, driven by privacy concerns and the demands of deep learning. Federated learning (FL) is a recently developed decentralised ML technique in which many distributed nodes train a common prediction model using their locally stored data. Because the training data is never routed to a central server, data privacy is better preserved: clients send only processed model updates to the server and do not share their personal information. However, FL is a young field that is still under development and has yet to achieve mainstream acceptance. In this context, the purpose of our study is to provide a comprehensive overview of the most important FL protocols, platforms, and real-world use cases, enabling researchers to develop privacy-preserving solutions for businesses that require FL.
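The abstract describes the core FL loop: clients train on private, locally stored data and only model updates reach the server. The sketch below is a minimal, illustrative federated averaging round in plain Python with NumPy; the function names (`local_step`, `fed_avg_round`), the linear-regression task, and all parameters are assumptions chosen for illustration, not part of any specific protocol surveyed in the paper.

```python
# Minimal sketch of one federated averaging (FedAvg-style) round.
# Assumption: a linear model trained by gradient descent on synthetic
# per-client data; clients share only weight updates, never raw data.
import numpy as np

def local_step(weights, X, y, lr=0.05, epochs=5):
    """Train locally on one client's private data; return updated weights."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg_round(global_weights, client_data):
    """One communication round: each client trains locally, then the server
    averages the returned weights, weighted by each client's data size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_step(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three clients with locally stored data that never leaves the client.
    clients = []
    for n in (50, 80, 30):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        clients.append((X, y))

    w = np.zeros(2)                      # server's global model
    for _ in range(20):                  # repeated communication rounds
        w = fed_avg_round(w, clients)
    print("estimated weights:", w)       # approaches [2.0, -1.0]
```

Weighting the average by each client's sample count is one common design choice for building the shared prediction model from unevenly sized local datasets; production FL systems add secure aggregation, client sampling, and communication compression on top of this basic loop.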