The Center for a New American Security (CNAS) welcomes the opportunity to provide comments for the
2025 Artificial Intelligence (AI) Action Summit’s Expert Consultation hosted by the French Republic. CNAS
is an independent, bipartisan organization based in Washington, DC, dedicated to developing bold, pragmatic,
and principled national security solutions. CNAS has several research initiatives focused on critical and
emerging technologies, including a center-wide, multi-year initiative addressing the national security risks and
opportunities of artificial intelligence.
This document reflects the personal views of the authors alone. Because this consultation has multiple
authors, authorship does not imply agreement with every point made in the document. As a research and
policy institution committed to the highest standards of organizational, intellectual, and personal integrity,
CNAS maintains strict intellectual independence and sole editorial direction and control over its ideas,
projects, publications, events, and other research activities. CNAS does not take institutional positions on
policy issues and the content of CNAS publications reflects the views of their authors alone. In keeping with
its mission and values, CNAS does not engage in lobbying activity and complies fully with all applicable
federal, state, and local laws. CNAS will not engage in any representational activities or advocacy on behalf of
any entities or interests and, to the extent that the Center accepts funding from non-U.S. sources, its activities
will be limited to bona fide scholastic, academic, and research-related activities, consistent with applicable
federal law. Each year, the Center publicly acknowledges all contributing donors on its website.
Authors:
Caleb Withers, Research Associate, Technology and National Security Program
Noah Greene, Research Assistant, AI Safety and Stability Project
Michael Depp, Research Associate, AI Safety and Stability Project
Janet Egan, Senior Fellow, Technology and National Security Program
General vision for the outcome of the 2025 French AI Action Summit: What outcomes would you like
to see from the AI Action Summit as priorities?
To date, the international community has discussed AI in a variety of venues, including the G7 Hiroshima AI
Process, the United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems
(UN GGE on LAWS), previous iterations of the AI Action Summit, U.S. efforts to formalize a multilateral
political declaration on the responsible use of AI, and the Summits on Responsible AI in the Military Domain
(REAIM). The 2025 French AI Action Summit is an opportunity to operationalize the political statements
agreed at its predecessor summits. In this consultation, we identify several recommended outcomes to
improve AI trust and governance. First, further intergovernmental collaboration and investment to advance
model evaluations and to align on thresholds for when such evaluations are necessary pre-deployment. Second,
greater consensus on definitions of training compute thresholds and their adoption by governments as a
tool for targeted oversight; compute thresholds have the virtue of narrowing which AI
models warrant further evaluation while reducing the regulatory burden on most AI developers. Third,
advancing the science of technical mechanisms for privacy-preserving verification of model and
compute workload characteristics, given their promise to support international governance. Fourth,
leveraging domain-specific international bodies where possible (for instance, the International
Civil Aviation Organization and the World Health Organization within their respective remits), allowing
organizations such as the United Nations to focus their efforts on cross-cutting issues. Finally, avoiding the
trap of negotiating a single consensus statement and instead focusing on practical outcomes, such as establishing
an online exchange platform for international AI experts, mechanisms for sharing data and compute
infrastructure internationally, revising agreements from other multilateral fora such as the United Nations,
and a public list of undesired AI uses and international priorities. This summit provides an excellent