
Amazon Web Services (AWS) recently announced a significant update to its guardrails feature, making it more accessible and efficient for customers. The new standalone API, called the Guardrails API, lets users configure common guardrails, such as filters for hate speech and sexualized content, and apply them to any AI model or application, even those running outside of Amazon Bedrock.
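In practice, calling a standalone guardrail could look like the minimal sketch below, which assumes the API surfaces through boto3's bedrock-runtime client as apply_guardrail; the guardrail ID and version are placeholders, and the exact names should be checked against current AWS documentation rather than read as details confirmed by the announcement.

```python
# Minimal sketch: applying a pre-configured guardrail to arbitrary text via boto3.
# The guardrail ID and version below are placeholders; verify the client method
# and field names against the current AWS documentation.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def passes_guardrail(text: str) -> bool:
    """Return True if the text passes the guardrail, False if a policy blocks it."""
    response = runtime.apply_guardrail(
        guardrailIdentifier="my-guardrail-id",   # placeholder guardrail ID
        guardrailVersion="1",                    # placeholder version
        source="OUTPUT",                         # evaluate model output ("INPUT" for prompts)
        content=[{"text": {"text": text}}],
    )
    # action is "GUARDRAIL_INTERVENED" when a configured policy (e.g. hate speech) matches
    return response["action"] != "GUARDRAIL_INTERVENED"

if __name__ == "__main__":
    print(passes_guardrail("A response generated by any model, Bedrock-hosted or not."))
```

Because the check runs on plain text rather than on a Bedrock invocation, the same guardrail configuration can sit in front of self-hosted or third-party models as well.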

This update was unveiled at AWS’s New York Summit, where Vasi Philomin, vice president of generative AI at AWS, emphasized the importance of safety, privacy, and truthfulness in AI applications. The Guardrails API aims to give customers greater flexibility and control, regardless of the provider or type of model they are using.

In addition to the standalone API, AWS introduced a new feature called Contextual Grounding, which helps detect and filter hallucinations in AI models. The feature is designed to ensure that responses from AI agents are grounded in actual source data rather than fabricated information. AWS estimates that Contextual Grounding can detect and filter more than 75% of hallucinated responses, giving users more reliable and accurate output.
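A guardrail with a grounding filter might be configured along the lines of the sketch below; the contextualGroundingPolicyConfig shape, the GROUNDING and RELEVANCE filter types, and the thresholds are assumptions drawn from the announcement, not details it confirms.

```python
# Hedged sketch: attaching a hallucination (grounding) filter to a guardrail.
# Field names and filter types are assumptions; check the Bedrock API reference.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="grounded-responses",
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            # Flag responses that are not supported by the supplied source data
            {"type": "GROUNDING", "threshold": 0.75},
            # Flag responses that do not actually address the user's query
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
)
print(guardrail["guardrailId"], guardrail["version"])
```

The threshold values here are illustrative; a higher threshold blocks more aggressively, trading some false positives for fewer hallucinated answers reaching users.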

AWS also announced enhancements to AI agents built on Bedrock, including improved memory retention and code interpretation capabilities. These updates allow agents to retain more information across interactions and to analyze complex data with code, expanding their capabilities and customization options for users.
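A rough sketch of what enabling these agent features could look like with boto3's bedrock-agent client follows; memoryConfiguration, SESSION_SUMMARY, and the AMAZON.CodeInterpreter action-group signature are assumptions based on the announcement and should be verified against the current API reference, and the model ID and IAM role are placeholders.

```python
# Hedged sketch: enabling cross-session memory and the code interpreter on a
# Bedrock agent. Parameter shapes are assumptions; verify against the
# bedrock-agent API reference before use.
import boto3

agents = boto3.client("bedrock-agent", region_name="us-east-1")

agent = agents.create_agent(
    agentName="analyst-agent",
    foundationModel="anthropic.claude-3-sonnet-20240229-v1:0",                # placeholder model ID
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",   # placeholder role
    memoryConfiguration={
        "enabledMemoryTypes": ["SESSION_SUMMARY"],  # retain summaries across sessions
        "storageDays": 30,
    },
)

# Attach the built-in code interpreter so the agent can analyze data with code.
agents.create_agent_action_group(
    agentId=agent["agent"]["agentId"],
    agentVersion="DRAFT",
    actionGroupName="code-interpreter",
    parentActionGroupSignature="AMAZON.CodeInterpreter",
    actionGroupState="ENABLED",
)
```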

Overall, these updates demonstrate AWS’s commitment to providing customers with advanced tools and capabilities for building and deploying AI models. By offering a standalone guardrails API, implementing features like Contextual Grounding, and enhancing AI agent capabilities, AWS aims to empower customers to leverage generative AI more effectively in their businesses and industries.