US sets reporting requirements for AI models, hardware

The US Commerce Department has proposed a fresh set of reporting requirements for developers of cutting-edge AI models and those renting the infrastructure required to train them.

The rules [PDF], published on Monday, are a response to the Biden administration’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” enacted last northern autumn.

The executive order established interim reporting requirements for those developing large AI compute clusters and/or training frontier models. The order also directed the Commerce Department to define and maintain permanent reporting standards.

As we reported at the time, the interim limits targeted only the biggest models and compute clusters.

The updated rules mandate reporting of models that require more than 10^26 integer or floating point operations to train. Models trained primarily on biological sequence data are subject to a lower threshold of 10^23 operations.
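For a rough sense of what that figure means in practice, here is a minimal back-of-the-envelope sketch. It assumes the widely used rule of thumb that training a dense transformer costs roughly six operations per parameter per token; the model sizes and token counts are hypothetical, chosen only to illustrate where the 10^26 line falls.

    # A minimal sketch, assuming the common approximation that dense
    # transformer training costs roughly 6 * parameters * tokens operations.
    # The example model sizes below are hypothetical, not from the proposal.

    GENERAL_THRESHOLD = 1e26   # reportable training operations, general models
    BIO_THRESHOLD = 1e23       # lower threshold for biological sequence data

    def training_ops(params: float, tokens: float) -> float:
        """Approximate total training operations for a dense transformer."""
        return 6 * params * tokens

    for label, params, tokens in [
        ("70B parameters, 15T tokens", 70e9, 15e12),
        ("1T parameters, 30T tokens", 1e12, 30e12),
    ]:
        ops = training_ops(params, tokens)
        print(f"{label}: ~{ops:.1e} ops, reportable: {ops > GENERAL_THRESHOLD}")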

Entities developing such projects will also be required to disclose the capabilities of their models, their infosec protections, and any outcomes of red-teaming efforts to ensure that AI technologies “meet stringent standards for safety and reliability, can withstand cyber attacks, and have limited risk of misuse by foreign adversaries or non-state actors,” according to the announcement of the proposed rules.

Chief among the department’s concerns is that sufficiently advanced models could be used to facilitate cyber crime – or lower the barrier to developing biological, chemical, or nuclear weapons and dirty bombs – if action isn’t taken to test, identify, and mitigate these threats.

The rules also require infrastructure operators to report if their compute clusters exceed 300Gbit/sec networking capacity and have a theoretical peak performance greater than 10^20 integer or floating point operations per second for AI training – or 100 exaFLOPS without sparsity.

As a reminder, that’s equivalent to a cluster of 50,530 H100 GPUs connected via 400Gbit/sec InfiniBand, assuming FP8 precision, or a cluster of 101,060 such accelerators at the 16-bit precision more commonly employed in AI training.
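That arithmetic is easy to check. The sketch below assumes Nvidia’s published dense (non-sparse) peak figures for the H100 SXM – roughly 1,979 teraFLOPS at FP8 and 989.5 teraFLOPS at 16-bit precision – and divides the 10^20 FLOP/s threshold by per-GPU peak; small differences from the counts above come down to rounding.

    # Sanity-check of the cluster sizes quoted above. Per-GPU figures are
    # Nvidia's dense (no sparsity) datasheet peaks for the H100 SXM; they are
    # not part of the proposed rules themselves.

    THRESHOLD_FLOPS = 1e20                  # 100 exaFLOPS, dense

    H100_DENSE_PEAK_FLOPS = {
        "FP8": 1.979e15,                    # ~1,979 TFLOPS per GPU
        "FP16/BF16": 0.9895e15,             # ~989.5 TFLOPS per GPU
    }

    for precision, per_gpu in H100_DENSE_PEAK_FLOPS.items():
        gpus = THRESHOLD_FLOPS / per_gpu
        print(f"{precision}: ~{gpus:,.0f} H100s to reach 10^20 FLOP/s")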

While the threshold for reporting compute capacity hasn’t changed, the proposed rules increase the interconnect bandwidth from 100Gbit/sec to 300Gbit/sec. The rules also clarify that we’re talking about dense compute capability versus the sparse floating point mathematics often touted by Nvidia and its rivals.

Under these rules, those operating clusters that already exceed this threshold, or expect to within the next month, will be required to report the scope of their operations on a quarterly basis.

While those numbers may have seemed enormous a year ago, the scale and pace of AI innovation have accelerated considerably. Some hyperscalers, like Meta, are deploying hundreds of thousands of GPUs. However, we expect the list of infrastructure providers subject to the rules to be rather short.

“As AI is progressing rapidly, it holds both tremendous promise and risk. This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security,” US Commerce Secretary Gina Raimondo declared in a canned statement.

If approved following a 30-day comment period, the proposed rules will be codified in the Bureau of Industry and Security’s industrial base surveys data collection regulations.

Dual-use applications of AI – those with the potential for both peaceful and non-peaceful use cases – have been on Uncle Sam’s radar for some time now.

Back in January, the Commerce Department proposed rules that would require certain infrastructure-as-a-service providers to tattle on any foreign person using their services to train large AI models likely to be capable of dual-use applications.

While not named in the proposal, the implied target of these measures is China – which is known to be circumventing US trade restrictions on AI accelerators by running workloads on infrastructure rented inside the US.

Today’s proposal also comes a little under a week after the department tightened controls on quantum computing and semiconductor exports to China, Iran, Russia, and other nations of concern. ®
