: Attackers can use bi-level optimization to craft the precise "poison" samples that mislead feature-selection algorithms into choosing the wrong features, a particularly serious threat in wireless distributed learning (a toy sketch of this attack follows below).
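
A minimal sketch of that bi-level idea is shown below, using scikit-learn's LASSO as the feature selector. The outer search for poison points is a crude random-restart loop standing in for the gradient-based bi-level solvers used in the literature, and the data, poison budget, and decoy feature index are invented purely for illustration.

```python
# Hypothetical sketch of bi-level data poisoning against LASSO feature selection.
# Outer problem: choose poison points (X_p, y_p) within an energy budget so that
# the learner selects the wrong feature support.
# Inner problem: the LASSO fit on the poisoned training set.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Clean data: only the first 2 of 10 features are truly relevant.
n, d = 200, 10
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:2] = [3.0, -2.0]
y = X @ true_w + 0.1 * rng.normal(size=n)

def selected_support(X_tr, y_tr, alpha=0.1):
    """Inner problem: fit LASSO and return the set of selected features."""
    w = Lasso(alpha=alpha, max_iter=10_000).fit(X_tr, y_tr).coef_
    return set(np.flatnonzero(np.abs(w) > 1e-6))

clean_support = selected_support(X, y)

# Outer problem: search for a few poison points that push an irrelevant
# "decoy" feature (index 9) into the selected support. A random-restart
# search stands in for a proper bi-level gradient solver here.
n_poison, budget = 10, 5.0          # 5% poison, bounded norm per point
best_support, best_score = clean_support, 0.0
for _ in range(200):
    X_p = rng.normal(size=(n_poison, d))
    X_p = budget * X_p / np.linalg.norm(X_p, axis=1, keepdims=True)
    y_p = 5.0 * X_p[:, 9]           # correlate labels with the decoy feature
    support = selected_support(np.vstack([X, X_p]), np.concatenate([y, y_p]))
    score = len(support.symmetric_difference(clean_support))
    if score > best_score:
        best_support, best_score = support, score

print("clean support:   ", sorted(clean_support))
print("poisoned support:", sorted(best_support))
```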

Adversarial robustness is a model's ability to resist "adversarial examples": carefully crafted inputs that look normal to humans but cause ML models to make catastrophic errors. An imperceptible perturbation to the input can flip a 91%-confident "pig" classification into a 99%-confident "airliner".
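
The classic recipe for producing such a perturbation is the fast gradient sign method (FGSM). The sketch below assumes a pretrained PyTorch classifier `model`, an input batch `x` scaled to [0, 1], and integer labels `y`; none of these names come from this article.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015), assuming a pretrained
# PyTorch classifier `model`, an input batch `x` in [0, 1], and labels `y`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss the most, per input element.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep the input in its valid range
```

With epsilon at a few percent of the input range, the perturbed input is typically indistinguishable from the original to a human yet changes the model's predicted class.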

Recent studies highlight that foundational signal processing tasks are surprisingly vulnerable to data poisoning and feature modification:

: Subspace learning algorithms can be misled by perturbations crafted under specific energy constraints, compromising array signal processing (a toy illustration follows below).
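
As a toy illustration of this attack surface (not the specific method from the cited work), the sketch below shows how a handful of energy-bounded poison samples can rotate the principal subspace recovered by a PCA-style estimator. The dimensions, energy budget `E`, and decoy direction are invented for the example.

```python
# Hypothetical sketch: energy-bounded poison samples rotating a learned subspace.
# The attack direction is simply an orthogonal "decoy" direction scaled to the
# allowed energy budget, standing in for an optimized attack.
import numpy as np

rng = np.random.default_rng(1)

# Clean observations lie (approximately) in a 2-D subspace of R^8.
n, d, r = 500, 8, 2
basis = np.linalg.qr(rng.normal(size=(d, r)))[0]          # true subspace basis
X = rng.normal(size=(n, r)) @ basis.T + 0.05 * rng.normal(size=(n, d))

def principal_subspace(X, r):
    """Top-r right singular vectors of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    return np.linalg.svd(Xc, full_matrices=False)[2][:r].T

def largest_principal_angle(U, V):
    """Largest principal angle (degrees) between two subspaces."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    return np.degrees(np.arccos(np.clip(s.min(), -1.0, 1.0)))

U_clean = principal_subspace(X, r)

# Poison: a few samples along a direction orthogonal to the true subspace,
# each with energy (squared norm) equal to the budget E.
E, n_poison = 100.0, 25
decoy = np.linalg.qr(np.hstack([basis, rng.normal(size=(d, 1))]))[0][:, -1]
X_poison = np.sqrt(E) * np.tile(decoy, (n_poison, 1))

U_poisoned = principal_subspace(np.vstack([X, X_poison]), r)
print(f"subspace rotation caused by poisoning: "
      f"{largest_principal_angle(U_clean, U_poisoned):.1f} degrees")
```

Because the poison energy is concentrated along a single off-subspace direction, even a small number of samples can dominate the variance the estimator sees along that direction and pull the recovered subspace toward it.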