news-05072024-060530

Singapore is preparing guidelines to help make artificial intelligence (AI) systems more secure. The Cyber Security Agency (CSA) will soon publish its draft Technical Guidelines for Securing AI Systems for public review and feedback. The guidelines are voluntary and are intended to be used alongside organizations' existing security measures to address potential risks in AI systems.

During the Association of Information Security Professionals (AiSP) AI security summit, Singapore's Senior Minister of State for Communications and Information, Janil Puthucheary, emphasized the importance of enhancing the security of AI tools. He highlighted the rapid proliferation of AI across sectors and the evolving threat landscape that comes with it. For example, attackers can compromise AI models using techniques such as adversarial machine learning, as in the demonstrated case in which a Mobileye system was tricked by subtly altered speed limit signs.
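To make the adversarial-machine-learning idea concrete, here is a minimal, self-contained sketch against a toy linear classifier. It is an illustration of the general principle only, not the image-based attack on Mobileye; the classifier, weights, and inputs are all hypothetical.

```python
# Toy adversarial-example attack on a linear classifier: nudge the
# input along the model's weight vector until the predicted label
# flips. All numbers here are made up for illustration.

def predict(weights, bias, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def adversarial_example(weights, bias, x, step=0.1, max_steps=1000):
    """Perturb x in small steps along the weight vector (the
    direction the score is most sensitive to) until the label flips,
    mimicking how small input changes can fool a model."""
    original = predict(weights, bias, x)
    direction = 1.0 if original == 0 else -1.0
    adv = list(x)
    for _ in range(max_steps):
        adv = [xi + direction * step * w for xi, w in zip(adv, weights)]
        if predict(weights, bias, adv) != original:
            break
    return adv

w, b = [0.8, -0.5], -0.2
x = [0.2, 0.4]                      # classified as 0
adv = adversarial_example(w, b, x)  # small nudge flips it to 1
print(predict(w, b, x), predict(w, b, adv))
```

Real attacks on vision models work on the same principle, but search for perturbations in pixel space that are imperceptible to humans while still flipping the model's prediction.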

Puthucheary urged the industry and community to play their part in keeping AI systems safe from malicious threats. He noted that Singapore's Government Technology Agency (GovTech) is developing capabilities to simulate potential attacks on AI systems in order to better understand and address security vulnerabilities. This proactive approach will help organizations implement the safeguards needed to protect against existing and emerging threats targeting AI systems.

While AI presents new security risks, it also offers opportunities to enhance cyber defense. Security tools powered by machine learning can detect anomalies and respond to potential threats more effectively. AI can be leveraged to identify risks faster, at scale, and with greater precision, empowering security professionals to strengthen their defenses against cyber threats.
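As a simple illustration of the anomaly-detection idea, the sketch below flags statistical outliers in a traffic metric using z-scores. This is a deliberately minimal stand-in for the machine-learning tools described above, not any specific product; the "requests per minute" metric and threshold are hypothetical.

```python
# Minimal statistical anomaly detection: flag samples whose z-score
# (distance from the mean in standard deviations) exceeds a threshold.
# Production ML tools learn far richer baselines, but the idea is the same.
from statistics import mean, stdev

def detect_anomalies(samples, threshold=2.5):
    """Return the indices of samples more than `threshold` standard
    deviations from the mean."""
    if len(samples) < 2:
        return []
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Hypothetical requests-per-minute series with one burst that could
# indicate abuse: the spike at index 8 stands out from the baseline.
traffic = [100, 98, 102, 101, 99, 103, 100, 97, 1000, 101]
print(detect_anomalies(traffic))  # -> [8]
```

In practice, security tools apply this kind of baselining across many signals at once (logins, network flows, API calls), which is what lets them surface risks at a scale manual review cannot match.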

In addition to Singapore's efforts, the US National Security Agency's AI Security Center has released guidance on Deploying AI Systems Securely. These best practices aim to protect the integrity and availability of AI systems and to mitigate known vulnerabilities, providing methodologies and controls to detect and respond to malicious activity targeting AI systems and their data.

As the adoption of AI continues to grow, it is crucial for organizations to stay vigilant and proactive in securing their AI systems. Collaboration between industry stakeholders, cybersecurity professionals, and government agencies will be essential in addressing the evolving threat landscape and ensuring the safety and security of AI technologies. By staying informed and implementing best practices, organizations can effectively mitigate risks and protect their AI systems from potential cyber threats.