news-01072024-184106

Anthropic, a company known for its generative model Claude, has announced a new program aimed at funding the development of innovative benchmarks to evaluate the performance and impact of AI models. The company plans to provide grants to third-party organizations that can create benchmarks capable of measuring advanced capabilities in AI models. Interested parties can submit their applications for evaluation on an ongoing basis.

According to Anthropic, the goal of this initiative is to improve the field of AI safety by providing valuable tools that benefit the entire ecosystem. The company acknowledges that developing high-quality evaluations for AI models is a challenging task, and the demand for such benchmarks is growing rapidly.

One of the key issues in the AI industry today is the lack of benchmarks that accurately reflect how the average person interacts with AI systems. Many existing benchmarks are outdated and may not effectively measure the capabilities of modern generative AI models like Claude. To address this problem, Anthropic is calling for the creation of new benchmarks that focus on AI security and societal implications.

Specifically, Anthropic is looking for benchmarks that can assess an AI model’s ability to carry out tasks such as cyberattacks, enhancing weapons of mass destruction, and manipulating or deceiving people through deepfakes or misinformation. The company also aims to support research into benchmarks that evaluate AI’s potential in scientific research, multilingual conversations, bias mitigation, and toxicity filtering.

To achieve these goals, Anthropic envisions new platforms that allow experts to develop their own evaluations and conduct large-scale trials involving thousands of users. The company has hired a full-time coordinator for the program and is open to funding a variety of projects at different stages of development. Teams selected for funding will have the opportunity to work closely with Anthropic’s domain experts.

While Anthropic’s efforts to support new AI benchmarks are commendable, some in the AI community may have concerns about the company’s commercial interests. Anthropic is transparent about its desire for evaluations to align with its AI safety classifications, which could potentially limit the diversity of perspectives in the program. Additionally, some experts question the company’s focus on catastrophic and deceptive AI risks, as they believe the current state of AI technology does not pose immediate threats of superintelligence.

Despite these concerns, Anthropic hopes that its program will drive progress towards establishing comprehensive AI evaluation standards across the industry. The company’s mission to improve AI benchmarks aligns with other independent efforts in the field, but the extent to which these initiatives will collaborate with a commercial entity like Anthropic remains to be seen.