Anthropic, Google, Microsoft, and OpenAI have formed the Frontier Model Forum, a new consortium focused on advancing safe and responsible AI.

The Forum aims to benefit the AI community by developing best practices, evaluating capabilities and risks, and enabling research access. Key goals include:

  • Promoting AI safety research to minimize risks of large models.
  • Establishing industry standards and benchmarks for evaluating frontier models.
  • Advising policymakers and collaborating with stakeholders on responsible AI.
  • Supporting applications that address societal challenges like climate change.

Membership is open to organizations developing advanced AI models that demonstrate a commitment to safety and contribute to the Forum’s initiatives.

Over the next year, the Forum will focus on:

  • Identifying best practices for AI safety and risk mitigation.
  • Advancing AI safety research by coordinating efforts on key questions.
  • Facilitating information sharing between companies and governments on AI risks.

By pooling expertise, the Frontier Model Forum seeks to ensure that AI’s transformative potential benefits humanity safely and responsibly. This consortium represents an important step toward realizing that vision.