Adversarial attacks and defences in Federated Learning

Author: Rodríguez Barroso, Nuria

Supervised by: Francisco Herrera Triguero (Director)

Defence university: Universidad de Granada

Date of defence: 1 December 2023

Committee:
  1. Óscar Cordón García (Chair)
  2. Rocío C. Romero Zaliz (Secretary)
  3. María José del Jesús Díaz (Committee member)
  4. Pietro Ducange (Committee member)
  5. Senén Barro (Committee member)

Type: Thesis

Abstract

Artificial Intelligence (AI) is in the process of revolutionising numerous facets of everyday life. Nevertheless, as its development progresses, the associated risks also grow. Although its full potential remains uncertain, there is increasing apprehension regarding its deployment in sensitive domains such as education, culture, and medicine. One of the foremost challenges at present is striking a balance between the prospective benefits and the attendant risks, so that precaution does not impede innovation. This requires AI systems that are robust, secure, transparent, fair, respectful of privacy and autonomy, provide clear traceability, and remain accountable to auditing. In essence, it entails ensuring their ethical and responsible application, giving rise to the concept of trustworthy AI.

In this context, Federated Learning (FL) emerges as a distributed learning paradigm that preserves the privacy of training data while still harnessing global knowledge. Although its primary objective is data privacy, it also brings cross-cutting benefits such as robustness and reduced communication costs. However, like any learning paradigm, FL is susceptible to adversarial attacks aimed at altering the model’s behaviour or inferring private information. The central focus of this thesis is the development of defence mechanisms against adversarial attacks that compromise the model’s behaviour, while concurrently promoting the other requirements of trustworthy AI.
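To make the paradigm described above concrete, the sketch below simulates federated averaging over a handful of clients and replaces the plain mean with a coordinate-wise median, a standard robust-aggregation measure against poisoned client updates. This is a minimal illustration only, not the defence mechanisms developed in the thesis; all names (`local_update`, the attacker's update rule, the toy regression data) are illustrative assumptions.

```python
# Minimal sketch of federated averaging with a robust-aggregation defence.
# Illustrative only: the median defence and all names are assumptions,
# not the specific mechanisms proposed in this thesis.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient steps on squared loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulated federation: 5 honest clients, 1 model-poisoning attacker.
d, true_w = 3, np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, d))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

global_w = np.zeros(d)
for rnd in range(10):
    updates = [local_update(global_w, X, y) for X, y in clients]
    updates.append(-10.0 * global_w - 5.0)  # attacker's poisoned update
    stacked = np.vstack(updates)
    # Defence: the coordinate-wise median is far less sensitive to a
    # single outlying (malicious) update than the plain FedAvg mean.
    global_w = np.median(stacked, axis=0)

print("recovered weights:", np.round(global_w, 2))  # close to true_w
```

Swapping `np.median` for `np.mean` in this toy shows how a single byzantine client can drag plain federated averaging arbitrarily far from the true model, which is the kind of vulnerability that the defence mechanisms discussed in the abstract aim to mitigate.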