The National Institute of Standards and Technology (NIST) is taking a significant step toward the safety and security of artificial intelligence (AI) models by re-releasing a tool called Dioptra. The tool tests how susceptible AI models are to malicious, or “poisoned,” training data. The re-release of Dioptra responds to President Biden’s Executive Order on AI development, which directed NIST to assist with model testing.
Dioptra’s primary goal is to help small and medium-sized businesses and government agencies determine which attacks could compromise their AI models. With the tool, users can quantify how much a specific attack reduces a model’s performance and identify the conditions under which the model fails.
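As a rough illustration of what such a measurement looks like (this is not Dioptra’s actual API), the sketch below poisons a training set with a simple label-flipping attack and reports the resulting drop in test accuracy. The dataset, model, and the accuracy_after_poisoning helper are hypothetical stand-ins built on scikit-learn, assumed here only to make the idea concrete.

```python
# Illustrative sketch only: a label-flipping "poisoning" attack and the
# accuracy drop it causes. This does not use Dioptra's API; it shows the
# kind of measurement such testing involves.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Flip the labels of a random fraction of training points,
    retrain, and return accuracy on the clean test set."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return accuracy_score(y_te, model.predict(X_te))

for frac in (0.0, 0.1, 0.25, 0.4):
    print(f"poisoned fraction {frac:.0%}: "
          f"test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Running a sweep like this turns a vague worry (“poisoning hurts”) into a curve relating attack strength to performance loss, which is the kind of quantified answer the tool is meant to provide.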
Organizations need to take proactive measures to safeguard their AI programs from such threats. NIST stresses that AI models must not be influenced by malicious data, which could have catastrophic consequences. NIST Director Laurie E. Locascio notes that AI technology poses risks distinct from other software, underscoring the need for guidance documents and testing platforms that address those risks while promoting innovation.
Dioptra can test combinations of attacks, defenses, and model architectures to determine which threats matter most and which defenses work against them. While the tool does not eliminate all risk, it aims to reduce risk exposure while fostering innovation. Dioptra is available as a free download, putting it within reach of a wide range of organizations.
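The sketch below, again purely illustrative and independent of Dioptra’s interface, shows the shape of that attack-versus-defense comparison: it sweeps a small grid of attack strengths against two defenses, “no defense” and a simple k-NN label-sanitization filter (the knn_filter helper is a hypothetical stand-in, not a Dioptra component).

```python
# Illustrative sketch only: sweeping (attack strength x defense)
# combinations, mirroring the kind of grid such experiments cover.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

def poison(y_train, flip_fraction):
    """Label-flipping attack: corrupt a random fraction of labels."""
    yp = y_train.copy()
    idx = rng.choice(len(yp), size=int(flip_fraction * len(yp)), replace=False)
    yp[idx] = 1 - yp[idx]
    return yp

def knn_filter(X_train, y_train, k=10):
    """Simple sanitization defense: drop points whose label disagrees
    with the majority vote of their k nearest neighbors."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    keep = knn.predict(X_train) == y_train
    return X_train[keep], y_train[keep]

defenses = {"no defense": lambda Xt, yt: (Xt, yt), "k-NN filter": knn_filter}
for frac in (0.1, 0.3):
    yp = poison(y_tr, frac)
    for name, defend in defenses.items():
        Xc, yc = defend(X_tr, yp)
        acc = accuracy_score(
            y_te, LogisticRegression(max_iter=1000).fit(Xc, yc).predict(X_te))
        print(f"attack {frac:.0%}, {name}: test accuracy {acc:.3f}")
```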
Alongside the release of Dioptra, NIST has launched a program to help Americans navigate AI technologies and guard against synthetic content. The initiative supports the broader goal of developing AI for societal benefit and ensuring that it is deployed safely and ethically.
By providing tools like Dioptra alongside educational programs, NIST is working to strengthen the resilience of AI systems and enable organizations to adopt AI technologies securely. As AI use expands across sectors, initiatives like these play a crucial role in mitigating risk and fostering responsible AI innovation.