Anthropic Launches the Anthropic Institute to Study AI's Societal Risks
Anthropic announced the Anthropic Institute on Wednesday, a new internal think tank that consolidates three of the company's existing research teams into a single body focused on AI's large-scale societal implications.
What It Studies
The Institute's research agenda covers questions beyond model performance: what happens to jobs and economies as AI scales, whether AI makes the world safer or introduces new risks, how AI values might shape human values, and whether society can retain meaningful control. Its founding team of roughly 30 includes Matt Botvinick (formerly of Google DeepMind), economist Anton Korinek (on leave from the University of Virginia), and researcher Zoe Hitzig, who left OpenAI following its decision to introduce ads within ChatGPT.
C-Suite Shift
Anthropic co-founder Jack Clark, who spent more than five years as head of public policy, moves into a new role as head of public benefit to lead the Institute. The public policy function, which tripled in headcount in 2025, passes to Sarah Heck, previously head of external affairs. Anthropic also confirmed it will open its planned Washington, D.C. office, with policy work continuing to focus on national security, AI infrastructure, energy, and democratic governance of AI.
Context: The Pentagon Fight
The announcement lands days after Anthropic filed suit against the U.S. Department of Defense, which designated the company a national security supply chain risk on March 3. The designation followed Anthropic's refusal to lift restrictions preventing Claude from being used for autonomous lethal weapons or mass domestic surveillance. Clark framed the Institute's launch as long planned, but said recent events underscored its relevance. "What we're experiencing with the last few weeks just sort of shows you how much hunger there is for a larger national conversation by the public about this technology," he told The Verge.