Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers

To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes.

Below is the executive summary of our recently published paper "Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers." You can find the paper on GovAI's website.

Abstract

To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes. Compute – the computational power and infrastructure required to train and run these AI models – is emerging as a node for oversight. KYC, a standard developed by the banking sector to identify and verify client identity, could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls. Such a scheme has the potential to identify and warn stakeholders of potentially problematic or sudden advancements in AI capabilities, build government capacity for AI regulation, and allow for the development and implementation of more nuanced and targeted export controls. Unlike the strategy of restricting AI chip purchases, regulating digital access to compute offers more precise controls, allowing regulatory control over compute quantities as well as the flexibility to suspend access at any time. To enact a KYC scheme, the US government will need to work closely with industry to (1) establish a dynamic threshold of compute that effectively captures high-risk frontier model development, while minimizing imposition on developers not engaged in frontier AI; (2) set clear requirements and guidance for compute providers to keep records and report high-risk entities; (3) establish government capacity that allows for co-design, implementation, administration, and enforcement of the scheme; and (4) engage internationally to promote international alignment with the scheme and support its long-term efficacy. While the scheme will not address all AI risks, it complements existing proposed solutions by allowing for a more precise and flexible approach to controlling the development of frontier AI models and unwanted AI proliferation.

Executive Summary

Emerging risks associated with the development of frontier AI models[1] warrant additional regulatory intervention by the US government. The potential for these models to enhance adversarial military capabilities and facilitate human rights abuses has led the US to introduce export controls that, among other restrictions, limit exports of the specialized AI chips required to develop and deploy large AI models. Yet gaps in these controls have emerged: there are currently no restrictions on entities accessing controlled chips and their associated computing power through digital means, such as cloud compute provision, offering a potential avenue for adversarial militaries and non-state actors of concern to benefit from US technology. While a blanket ban on cloud access could harm US technology leadership and would be difficult to enforce, there are clear security grounds for addressing these proliferation risks.

At the same time, broader risks to security and public safety are eliciting concern and a willingness to act from industry and government alike. Experts in industry and academia are warning of significant misuse risks, such as AI increasing the availability of biological weapon information and incentivizing malicious actors to attempt to develop such weapons,[2] as well as increasing risks of misinformation and electoral interference. US AI leaders have committed to following voluntary guidelines,[3] but as noted by Senator Blumenthal, Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, ‘there will be regulation. The only question is how soon, and what.’[4] As home to leading players in the AI industry and supply chain, the US is uniquely positioned to shape regulatory approaches. Yet interventions will need to carefully balance maintaining US influence and industry power with providing avenues to effectively identify and mitigate critical risks.

Across these proliferation, safety, and security risks, compute – the computational power and infrastructure required to train and run these AI models – offers a key node for oversight and control. The quantity of compute required for frontier AI models has resulted in cloud compute forming a key part of the AI supply chain, with the US as the global leader in AI compute provision. Alongside other potential regulatory interventions,[5] increasing oversight of AI compute could enable earlier identification of emerging risks and more targeted responses.

This paper recommends that the US government implement a Know-Your-Customer (KYC) scheme for AI compute providers, most notably Cloud Service Providers (CSPs), to enable greater oversight of the development of frontier AI models. Such a concept has already been proposed by Microsoft,[6] as well as by AI researchers,[7] as a way of increasing accountability and managing risks. Implemented in partnership with industry, a KYC scheme has the potential to warn of significant advances in AI capability, build government capacity in AI regulation, and allow for more nuanced and targeted controls. This scheme could be accompanied by updated Export Administration Regulations that restrict the provision of above-threshold compute to companies on the Entity List. Beyond export controls, a KYC scheme could provide the groundwork for domestic safety regulations and support responsible AI development. The KYC scheme could be designed to leverage existing technical metrics and preserve privacy for compute providers and customers. This paper draws on lessons learned from the mature application of KYC in the financial sector to propose the development of a KYC scheme for compute providers. It recommends that the US government work with industry to:

1. Establish a threshold of compute for the scheme that effectively captures high-risk frontier model development,[8] while minimizing imposition on developers not engaged in frontier AI. The threshold should be defined in terms of total computational operations – a metric readily accessible to compute providers, since the chip-hours they already use for client billing can be converted into total computational operations (see the first sketch following this list). Additionally, this threshold would need to be dynamic and subject to periodic reassessment by government, in close consultation with industry, to remain in step with developments in training efficiency as well as broader societal changes. It would also need to be supported by collaboration among compute providers, as well as with government, to minimize evasion risks.

2. Set clear requirements for compute providers, including requirements for gathering information, implementing fraud detection, keeping records, and reporting to government any entities that match government-specified ‘high-risk’ profiles (see the second sketch following this list). These requirements should be technically feasible, resilient against efforts to evade detection, and enforceable, while preserving privacy.

3. Establish government capacity within the US Department of Commerce that allows for the co-design, implementation, administration, and enforcement of the scheme. This capacity should draw on existing expertise within the US government and contribute to a deeper understanding of AI regulatory challenges to inform broader policies.

4. Engage with international partners to promote alignment with the scheme. While the US, as a significant global compute provider that wields substantial influence over the semiconductor supply chain, can exert broad influence through a domestically implemented scheme, cooperation with international partners will be a key enabler of increased oversight over the longer term. Consistent international standards will help mitigate the risk of diminishing US AI leadership and will be essential to the long-term effectiveness of the scheme.
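
To make the conversion in recommendation 1 concrete, the following is a minimal sketch, in Python, of how a provider might translate billed chip-hours into an estimate of total computational operations and compare the result against a reporting threshold. The chip throughput, utilization factor, and threshold value are hypothetical placeholders for illustration only, not figures proposed in the paper.

```python
# Minimal sketch: convert billed chip-hours into an estimate of total
# computational operations (FLOP) and check it against a reporting
# threshold. All constants are hypothetical placeholders, not values
# proposed in the paper.

CHIP_PEAK_FLOP_PER_SEC = 1e15    # assumed peak throughput of one AI chip
ASSUMED_UTILIZATION = 0.4        # real workloads run well below peak
REPORTING_THRESHOLD_FLOP = 1e25  # hypothetical dynamic threshold

def estimated_total_flop(chip_hours: float) -> float:
    """Estimate total operations from the chip-hours used for billing."""
    return chip_hours * 3600 * CHIP_PEAK_FLOP_PER_SEC * ASSUMED_UTILIZATION

def exceeds_threshold(chip_hours: float) -> bool:
    """Would this cumulative usage cross the reporting threshold?"""
    return estimated_total_flop(chip_hours) >= REPORTING_THRESHOLD_FLOP

# Example: 10 million chip-hours accumulated by a single customer.
print(f"{estimated_total_flop(10_000_000):.2e} FLOP")  # 1.44e+25 FLOP
print(exceeds_threshold(10_000_000))                   # True
```

Because providers already meter chip-hours for billing, a check like this adds little operational overhead; the harder policy question is where to set the threshold and how often to revise it.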
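Recommendation 2’s record-keeping and reporting duties could likewise be pictured as a simple data structure plus a screening rule, as in the sketch below. The field names and the matching logic are illustrative assumptions; the paper does not prescribe a specific schema or risk profile.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    """Hypothetical KYC record a compute provider might retain."""
    legal_name: str
    jurisdiction: str        # country of incorporation
    identity_verified: bool  # whether identity checks were completed
    cumulative_flop: float   # estimated total compute used to date

def matches_high_risk_profile(record: CustomerRecord,
                              threshold_flop: float,
                              flagged_jurisdictions: set[str]) -> bool:
    """Illustrative screening rule (an assumption, not the paper's):
    report customers that cross the compute threshold and either fail
    identity verification or are incorporated in a flagged jurisdiction."""
    over_threshold = record.cumulative_flop >= threshold_flop
    risk_signal = (not record.identity_verified
                   or record.jurisdiction in flagged_jurisdictions)
    return over_threshold and risk_signal

# Example: an unverified customer above the threshold would be reported.
record = CustomerRecord("Example Labs Ltd", "XX", False, 2e25)
print(matches_high_risk_profile(record, 1e25, {"XX"}))  # True
```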

In support of this scheme, this paper makes several further recommendations to the US government, including engaging industry to co-design the scheme; developing more targeted controls for the cloud; publishing guidance on information sharing in the context of US antitrust laws to enable effective risk management by CSPs; and pursuing strong international advocacy and engagement to garner international buy-in and alignment.


You can read the paper here.


  1. Defined as ‘highly capable foundation models that could exhibit dangerous capabilities’. Anderljung et al., Frontier AI Regulation.
  2. Zakrzewski, Lima, and DiMolfetta, “Tech leaders including Musk, Zuckerberg call for government action on AI.”
  3. Kang, “In Show of Force, Silicon Valley Titans Pledge ‘Getting This Right’ With A.I.”
  4. Oversight of A.I.
  5. Such as interventions at the model and application levels, as proposed by Microsoft. Smith, Developing and Deploying AI Responsibly: Elements of an Effective Legislative Framework to Regulate AI.
  6. Smith, How do we best govern AI?
  7. Fist, Heim, and Schneider, Chinese Firms Are Evading Chip Controls.
  8. In this paper, we are primarily addressing the governance of AI systems development. Questions about potential large-scale deployment, i.e., inference compute, are outside the scope of this particular discussion. However, the oversight of deployment compute might also be a future policy tool for regulating AI deployment (Appendix A in O’Brien, Ee, and Williams, Deployment Corrections: An Incident Response Framework for Frontier AI Models; Brundage et al., forthcoming).