Privacy-Preserving Federated Machine Learning Techniques


Copyright © 2023 | Pages: 24
DOI: 10.4018/979-8-3693-0593-5.ch007

Abstract

Machine learning is increasingly used for data analysis, but centralized datasets raise concerns about data privacy and security. Federated learning, a distributed training method, enables multiple entities to collaboratively train a machine learning model without pooling their raw data. Clients train local models on their own datasets, while a central aggregator combines the resulting updates into a global model. Privacy-preserving federated learning (PPFL) addresses the privacy issues that remain in sensitive and decentralized data settings: it integrates federated learning with privacy-preserving techniques to achieve both privacy and model accuracy.
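To make the workflow described above concrete, the following minimal sketch shows one round structure of this protocol, federated averaging (FedAvg): each client trains on its own private data and sends only model parameters to the aggregator, which computes a size-weighted average. The linear model, squared-error loss, and synthetic client data are illustrative assumptions, not taken from the chapter.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # Client-side step: train a linear model on this client's private
    # data via gradient descent; only the weights leave the client.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def aggregate(client_weights, client_sizes):
    # Server-side step: weighted average of client models (FedAvg).
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each client holds its own synthetic dataset; raw data is never shared.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Clients train locally and send only model updates to the aggregator.
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = aggregate(updates, [len(y) for _, y in clients])

print("global model:", global_w)  # approaches true_w without sharing raw data

A PPFL variant would additionally protect the transmitted updates, for example by adding calibrated noise to each client's weights (differential privacy) or by combining the updates under secure aggregation so that the server never observes any individual client's exact update.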

1. Introduction

1.1. Information Security in Machine Learning

The growing privacy concerns in machine learning applications reflect how deeply AI and data-driven technologies have permeated our lives. As machine learning algorithms become indispensable to many facets of society, from personalized recommendations to healthcare diagnoses, the sensitive nature of the data being processed is a rising source of concern (Truong, 2021). People are understandably worried about the possible misuse or mishandling of their personal information. Public trust has been eroded by high-profile data breaches and scandals that have exposed the real dangers of data privacy violations.

Data privacy issues are heightened in the context of machine learning because models frequently need access to large and varied datasets. This raises questions about data ownership, consent, and the potential for bias and discrimination. The difficulty lies in using machine learning effectively while upholding the autonomy and rights of each person (Tan et al., 2022). Ultimately, these growing privacy concerns underscore the necessity of responsible and ethical AI development. They demand transparency, accountability, and the integration of privacy-preserving techniques into machine learning pipelines. Only by addressing these issues can we ensure that the benefits of machine learning are realized while the security and privacy of people and their data are maintained.

1.2. Risks of Revealing Private Information for Model Training

Sharing sensitive data for model training carries several inherent dangers that should be carefully weighed. One of the main worries is the possibility of data breaches or unauthorized access (Yin, 2021). The risk of cyberattacks and data leaks rises when sensitive information, such as personal identifiers, medical records, or financial details, is exchanged. Such breaches can seriously harm both individuals and organizations by exposing highly personal data and enabling identity theft, fraud, and other crimes.

Privacy infringement is another risk. Sharing sensitive information without strong privacy safeguards may violate people's right to control their personal data. When users or data subjects have not given explicit consent for their data to be shared or used for model training, the result can be a breach of trust.

Additionally, when sensitive data is involved, the risk of bias and discrimination increases. Models trained on such data may produce unfair or biased outcomes in a variety of applications, including lending, hiring, and criminal justice.
