Adversarial attacks in generative AI are deliberate manipulations of machine learning models using carefully crafted input data. These attacks exploit weaknesses in the model's decision-making process to induce faulty or misleading outputs.
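One well-known way such inputs are crafted is the Fast Gradient Sign Method (FGSM), which nudges every input feature in the direction that most increases the model's loss. The sketch below is a minimal illustration, assuming a PyTorch classifier `model`, an input batch `x` scaled to [0, 1], true labels `y`, and a perturbation budget `epsilon`; these names are illustrative rather than drawn from any specific library.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example: perturb x within an epsilon budget
    in the direction that maximizes the model's loss on the true label."""
    x_adv = x.clone().detach().requires_grad_(True)

    # Compute the loss the attacker wants to increase
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()

    # Step each input feature by epsilon in the sign of its gradient,
    # then clamp back to the valid input range
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

Even with a tiny `epsilon`, the perturbed input can look unchanged to a human while flipping the model's prediction, which is exactly the kind of decision-process vulnerability these attacks exploit.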