The phenomenon of “data poisoning” poses a significant threat to Artificial Intelligence (AI) systems. The issue, although not new, has gained renewed attention with the proliferation of Big Data and the evolution of AI technologies. Data poisoning consists of manipulating the datasets used to train Machine Learning (ML) models, or injecting tampered records into them. Such attacks can drastically reduce the reliability of these models, or even allow attackers to plant backdoors that let them manipulate the models at will.
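To make the mechanism concrete, the sketch below simulates a simple label-flipping attack on a toy classifier. The use of scikit-learn, the synthetic dataset, and the 30% flip rate are illustrative assumptions, not details from any documented incident.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a toy binary classification dataset and hold out a clean test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips the labels of 30% of the training samples (illustrative rate).
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]

# Train the same model on clean and on poisoned labels, then compare.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
print("accuracy with clean labels:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned labels:", poisoned_model.score(X_test, y_test))
```

Even this crude attack measurably degrades test accuracy; targeted backdoor attacks achieve the same effect with far fewer tampered samples, which is what makes them hard to spot.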
AI and ML systems across sectors, from energy to transport, are susceptible to data poisoning. For instance, attackers can compromise the training cycles of autonomous vehicles, causing them to misinterpret traffic signs. The danger becomes even more palpable in healthcare applications, where poisoned data can lead to incorrect diagnoses and strain the healthcare system.
To mitigate these risks, it is essential to understand the nature of data poisoning and its consequences. One detection approach is to evaluate the same datasets with previously trained ML models and with the current production models: discrepancies between their outputs can indicate that the data has been altered (a sketch of this comparison follows below). Policies that limit how much data a single user can contribute are also beneficial, since attackers often need to inject large quantities of data into a dataset to be effective. Enhancing access controls and strengthening identification policies for both clients and servers, including cloud services, is equally crucial.
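A minimal sketch of that comparison follows, assuming both models expose a scikit-learn-style predict() method and that a trusted held-out set is available; the 5% alert threshold is a placeholder, not a recommended value.

```python
import numpy as np

ALERT_THRESHOLD = 0.05  # placeholder value; tune against your own baseline

def disagreement_rate(reference_model, production_model, X_holdout):
    """Fraction of trusted held-out samples on which the two models disagree."""
    return float(np.mean(
        reference_model.predict(X_holdout) != production_model.predict(X_holdout)
    ))

def flag_possible_poisoning(reference_model, production_model, X_holdout):
    """Warn if the current model has drifted suspiciously far from the reference."""
    rate = disagreement_rate(reference_model, production_model, X_holdout)
    if rate > ALERT_THRESHOLD:
        print(f"WARNING: models disagree on {rate:.1%} of held-out samples; "
              "review the data ingested since the reference model was trained.")
    return rate
```

The same principle supports the contribution-limit policy: data can be rejected at ingestion time whenever a single source exceeds a fixed share of a training batch.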
In 2021, Hyrum Anderson of Microsoft demonstrated how information could be extracted from an ML model without being detected by defensive systems. The presentation offers valuable insight into the risks businesses face and suggests directions for averting them.
Defensive techniques should include reducing the attack surface with firewalls, promptly applying security patches, monitoring network traffic, and maintaining a robust incident response plan. Physical security is equally important, since data poisoning can also originate inside company premises. Cleaning poisoned data is challenging, and a compromised dataset can render AI training efforts futile; filtering obviously anomalous samples before training, as sketched below, is one partial safeguard.
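The sketch below drops training samples that an outlier detector flags as anomalous. The choice of scikit-learn's IsolationForest and the 5% contamination rate are assumptions on my part, and such filtering catches only crude poisoning, not carefully crafted backdoor samples.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def drop_outliers(X: np.ndarray, y: np.ndarray, contamination: float = 0.05):
    """Remove the samples an IsolationForest flags as anomalous.

    contamination is the assumed share of poisoned samples (illustrative).
    """
    detector = IsolationForest(contamination=contamination, random_state=0)
    inlier_mask = detector.fit_predict(X) == 1  # 1 = inlier, -1 = outlier
    return X[inlier_mask], y[inlier_mask]
```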
Source: Cybersecurity360
To mitigate potential threats, it is important to implement additional cybersecurity measures with the help of a trusted partner like INFRA www.infrascan.net, or you can try it yourself using check.website.