Techniques used to manipulate or deceive machine learning systems by feeding them crafted inputs, poisoning their training data, or exploiting model weaknesses to induce incorrect or unsafe decisions.
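One of the best-known crafted-input attacks is the fast gradient sign method (FGSM): perturb an input in the direction of the sign of the loss gradient so a model's decision flips. The sketch below is a minimal, self-contained illustration against a toy logistic-regression "model"; the weights, input, and epsilon are invented for the example and are not from any real system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": logistic regression with fixed, hypothetical weights.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    # Probability of class 1.
    return sigmoid(w @ x + b)

# A clean input the model confidently assigns to class 1 (p > 0.5).
x = np.array([1.0, 0.5])
p_clean = predict(x)

# Gradient of the cross-entropy loss (true label y = 1) with respect
# to the INPUT, not the weights: dL/dx = (p - y) * w.
y = 1.0
grad_x = (predict(x) - y) * w

# FGSM step: nudge each input feature in the direction that
# increases the loss, bounded by epsilon per feature.
eps = 0.9
x_adv = x + eps * np.sign(grad_x)
p_adv = predict(x_adv)  # the prediction flips to class 0
```

Real attacks work the same way but take the gradient through a deep network (e.g. with an autodiff framework) and use a much smaller epsilon, so the perturbed input looks unchanged to a human while still flipping the model's decision.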