FGM (Fast Gradient Method) is a prominent adversarial attack technique that generates small, input-dependent perturbations to mislead machine learning models. It adds a perturbation aligned with the gradient of the loss with respect to the input; in the widely used sign variant (FGSM), the adversarial example is x' = x + ε · sign(∇ₓ J(θ, x, y)), which aims to increase the model's prediction error under a small perturbation budget ε.
In plain terms, FGM makes subtle, deliberately crafted changes to data inputs that trick AI models into making mistakes. It is used to test how robust AI systems are, revealing that even models which perform well on clean data can fail significantly when faced with these adversarial inputs.
Also known as: Fast Gradient Sign Method (FGSM)
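The sign-based update described above can be sketched in a few lines. The following is a minimal, illustrative example on a toy logistic-regression classifier, not an attack on any specific library or model; the weights, input, and ε value are assumptions chosen so the perturbation visibly flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """Return x' = x + epsilon * sign(dL/dx), where L is the
    binary cross-entropy loss of a logistic model p = sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)   # model's predicted probability for class 1
    grad_x = (p - y) * w     # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)

# Toy setup (illustrative values): a clean input correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])     # w @ x + b = 1.5, so p_clean ≈ 0.82
y = 1.0                      # true label

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.9)
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
```

Here the clean input is confidently classified as class 1, while the perturbed input is pushed below the 0.5 decision threshold and misclassified. The non-sign FGM variant replaces `np.sign(grad_x)` with the normalized gradient `grad_x / np.linalg.norm(grad_x)`, bounding the perturbation in L2 norm instead of L∞.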