Application Domains of Federated Learning in Healthcare 5.0

T. Ananth Kumar, A. Gokulalakshmi, P. Kanimozhi, G. Glorindal
Copyright © 2024 | Pages: 18
DOI: 10.4018/979-8-3693-1082-3.ch016

Abstract

Federated learning has emerged as a game-changing approach in machine learning, allowing high-quality centralised models to be trained across a network of decentralised clients. Federated learning is defined by a collaborative learning process involving a large number of clients, each of which contributes insights from its localised dataset. This collaborative approach is critical in cases where data privacy and network constraints are paramount. This research focuses on the learning algorithms built for this setting. At each iteration, individual clients autonomously compute model updates based on their local data, then communicate these updates to a central server. The central server aggregates these client-side updates to construct an updated global model. The challenge in this setting is to train models efficiently while dealing with clients that have unreliable and slow network connections.

1. Introduction

In the ever-changing world of machine learning, there is a drive to successfully use vast and distributed data resources while prioritising privacy and efficiency. This pursuit has given rise to novel paradigms, with one approach in particular gaining prominence: “Federated Learning.” This ground-breaking innovation challenges the usual practice of centralised data collection and training. It deals with challenges such as data privacy, security, and communication constraints. To achieve efficient model synchronisation, we investigated strategies that optimise communication frequency, data compression, and network-aware updates (Tyagi et al., 2023). Furthermore, we investigate the compromises that accompany federated learning, such as balancing model accuracy, convergence speed, and the costs of communication. We conduct simulations to evaluate the influence of communication restrictions on the training process, shedding light on techniques to avoid convergence slowdowns. This study adds to the advancement of communication-efficient Federated Learning by tackling communication bottlenecks, paving the way for robust and practical machine learning in resource-constrained contexts (Abdellatif et al., 2022). Federated Learning operates at the confluence of machine learning advancement and data privacy concerns (Rani et al., 2023). Federated Learning deviates from the norm by instituting a decentralised model training method. Diverse devices, ranging from smartphones to IoT sensors, use their unique data to train localised models, all without exposing sensitive information to a central server. Instead, the emphasis is on communicating only model updates, packaged as gradients, for aggregation. This protection of data privacy is the foundation of the Federated Learning framework, which shows promise as a solution to the ethical quandaries involved with data sharing (Sirohi et al., 2023).
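
One common data-compression strategy of the kind mentioned above is top-k gradient sparsification: a client transmits only the largest-magnitude entries of its update, cutting per-round communication at some cost in fidelity. The sketch below is a minimal illustration of the idea (the function names and the choice of NumPy are our own, not the chapter's implementation):

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient,
    so the client sends (indices, values) instead of the dense vector."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

def densify(idx, vals, dim):
    """Server-side reconstruction of the sparse update."""
    out = np.zeros(dim)
    out[idx] = vals
    return out

# Toy gradient: only the two dominant entries survive compression.
grad = np.array([0.05, -3.0, 0.2, 1.5, -0.01])
idx, vals = topk_sparsify(grad, k=2)
restored = densify(idx, vals, dim=grad.size)
# restored -> [0.0, -3.0, 0.0, 1.5, 0.0]
```

In practice such schemes are often paired with error feedback, where the entries dropped in one round are accumulated locally and added back in the next, so that compression does not stall convergence.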
Federated Learning is fundamentally an innovative concept that encompasses the values of decentralised learning, collaborative intelligence, and data sovereignty. It supports the construction and improvement of models within the domain of machine learning by using data scattered across multiple devices, each with its own particular data perspective. Federated Learning's decentralised nature complements the proliferation of devices that characterises the Internet of Things (IoT) domain. In this context, a diverse range of devices, from smartphones and wearables to sensors and industrial equipment, contribute to the data ecosystem's enrichment (Jiang et al., 2023). Federated Learning algorithms are built on a novel conceptual structure that departs from the usual concept of centralising data and computation. Instead of storing data on a server for training purposes, Federated Learning enables a cooperative learning technique that takes place immediately on the devices that generate the data. This novel approach stems from the realisation that data privacy, security, and ownership have become crucial in the age of digital growth. The dissemination of a foundational model to individual devices is the first step in Federated Learning. These devices, termed clients, then participate in localised model training using their respective data. Importantly, no unprocessed data is transmitted outside of these devices, ensuring the security of critical information. Instead, only model enhancements, encoded as gradients, are delivered to a central server for aggregation. Federated Learning connects data ownership and model improvement, cultivating a cooperative environment in which individual devices contribute to the learning endeavour collaboratively (Gabrielli, Pica, & Tolomei, 2023).
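
The round structure described above — clients train locally on private shards, the server aggregates their updates into a new global model — can be sketched as a federated-averaging loop. This is a minimal illustration under our own simplifying assumptions (a linear model with squared loss standing in for any local learner; the function names are hypothetical):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    linear least-squares model. Raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """One round: every client trains locally, then the server averages
    the returned models, weighted by each client's dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy run: three clients, each holding a private shard drawn from
# the same underlying linear relationship.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
# After 20 rounds, w closely approximates true_w.
```

Note that only model parameters cross the network; the per-client arrays stay where they were generated, which is exactly the privacy property the paragraph above describes.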
This combination of data independence and collective intelligence expands the capabilities of machine learning applications, allowing for the development of more precise and diverse models while adhering to the principles of responsible data management. In summary, the concept of Federated Learning goes beyond the traditional boundaries of centralised data aggregation, encompassing a viewpoint in which data privacy, collaboration, and efficient communication seamlessly coexist. As we explore the complexities of Federated Learning, we discover the potential to revolutionise sectors, expand the capabilities of AI systems, and foster a future where data accelerates innovation while retaining an unshakable commitment to individual privacy and security.
