Responses to the White House’s Request for Information

• “Companies have experienced the challenges of dealing with a fragmented and increasingly complex regulatory landscape due to the patchwork of state data privacy laws, which hinders innovation and the ability to provide consumer services. Federal AI legislation with strong preemption should provide protection for consumers and certainty for businesses developing and deploying AI.”
• “The administration should preserve but refocus the AI Safety Institute (AISI) to ensure the federal government provides the foundational standards that inform AI governance. While AISI, housed at NIST, does not set laws, it plays a critical role in developing safety standards and working with international partners—functions that are essential for maintaining a coherent federal approach. Without this, AI governance will continue to lack a structured federal foundation, leaving states to introduce their own regulations in response to AI risks without clear federal guidance. This risks creating a fragmented regulatory landscape where businesses must comply with conflicting requirements, and policymakers struggle to craft effective, evidence-based laws.”
• “The U.S. government should support the release of open-source AI models, datasets, and tools that can be used to fuel U.S. AI development, innovation, and economic growth. Open-source models and tools enable greater participation in the AI domain, allowing lower-resource organizations that cannot develop base models themselves to access, experiment, and build upon them.”
• “The Administration should ensure that the U.S. avoids a fragmented regulatory environment that would slow the development of AI, including by supporting federal preemption of state-level laws that affect frontier AI models. Such action is properly a federal prerogative and would ensure a unified national framework for frontier AI models focused on protecting national security while fostering an environment where American AI innovation can thrive. Similarly, the Administration should support a national approach to privacy, as state-level fragmentation is creating compliance uncertainties for companies and can slow innovation in AI and other sectors.”
• “We believe that computing power will become an increasingly significant driver of economic growth. Accordingly, the White House should track the relationship between investments in AI computational resources and economic performance to inform strategic investments in domestic infrastructure and related supply chains.”
• “The White House should engage with Congress on and task relevant agencies with examining how AI adoption might reshape the composition of the national tax base, and ensure the government maintains visibility into potential structural economic shifts.”
• The Trump administration should also “expedite appointments, vetting, and processing for visa applicants with job offers in cutting-edge AI research, development, and innovation.”
• The Trump administration should also “work with Congress to support AI literacy efforts for the American people” and provide them with the “necessary education and information to make informed decisions about their AI use and consumption.”
• “Where practicable, U.S. agencies should use existing immigration authorities to facilitate recruiting and retention of experts in occupations requiring AI-related skills, such as AI development, robotics and automation, and quantum computing.”
• “The United States is losing ground to China in the race to become Africa’s preferred AI partner. Over the past few years, the U.S. government has only offered vague commitments and diplomatic statements to the continent, while China has taken concrete action…. The United States should get proactive about strengthening strategic ties and better positioning itself as the preferred partner for AI innovation in emerging markets… DeepSeek’s open-source approach has already made it a preferred choice for many developers in Africa. If the United States wants to remain competitive, it should ensure its own AI companies stay at the forefront of open-source innovation. That means continuing to resist undue restrictions on open-source AI and open model weights, ensuring American-developed models remain accessible and widely adopted.”
• “The Biden administration created the U.S.-China AI Working Group, but it only convened twice, with few tangible outcomes. The Trump administration’s new AI Action Plan should reframe this group as a technical expert body to tackle shared AI risks and reduce tensions without undermining America’s AI lead. This reformulated group would serve as a body to discuss shared AI risks, instead of acting as a forum for comprehensive political changes in the U.S.-China relationship. This means avoiding politically contentious or overly broad areas of discussion, such as AI disinformation or its effect on human rights, and focusing instead on narrow, less politically contentious technical problems ripe for scientific collaboration, such as identifying and responding to dangerous behaviors in AI models, including deception, attempted self-replication, or circumventing human control.”
• “Establishing frameworks for international cooperation and discussion channels for emerging AI-accelerated biotech issues remains crucial, despite anticipated Chinese resistance to joining such an initiative. Following the model of the U.S. ‘Political Declaration on Responsible Military Use of AI and Autonomy,’ articulating guiding principles during early development stages can positively influence technological trajectories for both participating and non-participating nations.”
• “The U.S. government should work with aligned countries to develop the international standards needed for advanced model capabilities and to drive global alignment around risk thresholds and appropriate security protocols for frontier models. This includes promulgating an international norm of ‘home government’ testing—wherein providers of AI with national security-critical capabilities are able to demonstrate collaboration with their home government on narrowly targeted, scientifically rigorous assessments that provide ‘test once, run everywhere’ assurance.”
• “The U.S. government should oppose mandated disclosures that require divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models. Overly broad disclosure requirements (as contemplated in the EU and other jurisdictions) harm both security and innovation while providing little public benefit.”
• “While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans.”
• The U.S. government should require countries “to sign government-to-government agreements outlining measures to prevent smuggling. As a prerequisite for hosting data centers with more than 50,000 chips from U.S. companies, the U.S. should mandate that countries at high risk for chip smuggling comply with a government-to-government agreement that 1) requires them to align their export control systems with the U.S., 2) takes security measures to address chip smuggling to China, and 3) stops their companies from working with the Chinese military.”
• The U.S. government should also “consider reducing the number of H100s that Tier 2 countries can purchase without review to further mitigate smuggling risks.”
• The U.S. Department of Commerce’s Bureau of Industry and Security should analyze the “potential commercial, economic and competitiveness effects” of export controls and consult with potentially affected industries, as well as “advocate that key allies embrace comparable controls to ensure that U.S. companies are not uniquely disadvantaged.”
• “Rather than focusing narrowly on restricting access, U.S. policy should pivot towards bolstering domestic AI capabilities, enhancing global export competitiveness, and advocating for reciprocal market access. If China continues gaining ground despite restrictions while U.S. firms lose opportunities abroad, the current approach will have done more harm than good.”
• “The Bureau of Industry and Security (BIS) should take a more proactive approach by tightening and enforcing export controls. Current export controls focus on restricting finished AI chips, but gaps in the supply chain undermine their effectiveness… To close these gaps, BIS should expand restrictions to cover upstream components and advanced packaging materials, apply U.S. controls to any technology using American IP regardless of where it is manufactured, and strengthen enforcement on suppliers facilitating these workarounds. Without these measures, China will continue stockpiling essential AI hardware while U.S. firms lose market access without achieving meaningful strategic gains.”
• To address chip smuggling into China, Congress should “significantly increase BIS’s budget to enhance its monitoring and enforcement capabilities, including hiring additional technical specialists and field investigators.”
• For the “broader U.S. export strategy to work,” BIS should “clearly articulate and justify the objectives of the export controls to allies.”
• “The U.S. government should adequately resource and modernize the Bureau of Industry and Security (BIS), including through BIS’s own adoption of cutting-edge AI tools for supply chain monitoring and counter-smuggling efforts, alongside efforts to streamline export licensing processes and consideration of wider ecosystem issues beyond limits on hardware exports.”
• OpenAI proposes maintaining the three-tiered framework of the AI diffusion rule but expanding the countries in Tier I (countries that commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens could be considered Tier I countries).
• “This strategy would encourage global adoption of democratic AI principles, promoting the use of democratic AI systems while protecting US advantage. Making sure that open-sourced models are readily available to developers in these countries also will strengthen our advantage. We believe the question of whether AI should be open or closed source is a false choice—we need both, and they can work in a complementary way that encourages the building of AI on American rails.”
• The U.S. government should “task federal agencies with streamlining permitting processes by accelerating reviews, enforcing timelines, and promoting inter-agency coordination to eliminate bureaucratic bottlenecks.”
• “Some authoritarian regimes that do not share our country’s democratic values and may pose security threats are already actively courting American AI companies with promises of abundant, low-cost energy. If U.S. developers migrate model development or storing of model weights to these countries in order to access these energy sources, this could expose sensitive intellectual property to transfer or theft, enable the creation of AI systems without proper security protocols, and potentially subject valuable AI assets to disruption or coercion by foreign powers.”
• “The Administration should work to shorten decision timelines on environmental reviews, provide preliminary feedback on application completion and accuracy, and digitize operations to streamline processes, including application submissions, necessary document uploads, feedback for revisions and status updates.”
• The U.S. government should “partner with state and local regulators to create designated special compute zones that aim to—as much as possible—align permitting and regulatory frameworks across jurisdictions and minimize barriers to AI infrastructure development.”
• The U.S. government should adopt a “National Transmission Highway Act” to “expand transmission, fiber connectivity and natural gas pipeline construction” and streamline the processes of planning, permitting and paying to “eliminate redundancies.”
• The U.S. government should also develop a “Compact for AI” among U.S. allies and partners that streamlines access to capital and supply chains to compete with Chinese AI infrastructure alliances, as well as institute “AI Economic Zones” that “speed up permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors.”
• The U.S. government should also “eliminate regulatory and procedural barriers to rapid AI deployment at the federal agencies, for both civilian and national security applications” and “direct the Department of Defense and the Intelligence Community to use the full extent of their existing authorities to accelerate AI research, development, and procurement.”
• “We also encourage the White House to leverage existing frameworks to enhance federal procurement for national security purposes, particularly the directives in the October 2024 National Security Memorandum (NSM) on Artificial Intelligence and the accompanying Framework to Advance AI Governance and Risk Management in National Security.”
• “Additionally, we strongly advocate for the creation of a joint working group between the Department of Defense and the Office of the Director of National Intelligence to develop recommendations for the Federal Acquisition Regulatory Council (FARC) on accelerating procurement processes for AI systems while maintaining rigorous security and reliability standards.”
• “Agencies should establish clear visions for how AI will be used in sectors and AI adoption ‘grand challenges’ (i.e., highly ambitious and impactful goals for how AI can transform an industry) to accelerate deployment in critical sectors.”
• The AI Action Plan can develop public trust in the federal government’s use of AI by “building on agencies’ existing use case inventories – a key channel for the public to learn information about how agencies are using and governing AI systems and for industry to understand AI needs within the public sector – and by requiring agencies to provide public notice and appeal when individuals are affected by AI systems in high-risk settings.”
• “The AI Action Plan should recognize that independent external oversight is also critically important to promote safe, trustworthy, and efficient use of AI in the national security/intelligence arena. Many such uses will be classified and exposure of them could put national security at risk. At the same time, because the risk of abuse and misuse is high when such functions are kept secret, an oversight mechanism with expertise, independence and power to access relevant information (even if classified) should be established in the Executive Branch. CDT has recommended that Congress establish such a body, and the AI Action Plan should support such an approach.”
• The U.S. military can address the concerns of potential coordination conflicts “by working across services to clarify concepts of employment and identify potential points of conflict between friendly heterogeneous AI and autonomous systems.”
• The Office of the Secretary of Defense (OSD) has “not empowered a DOD-wide entity to set AI policies for the services. This results in duplication of efforts across the military services, with multiple memos guiding efforts across the DOD in different ways. For example, within each service, different commands have different network ATO standards, which require substantial rework by the government and AI vendors to satisfy before deployment. Continuous ATOs and ATO reciprocity must be enforced across OSD and an entity should be empowered to synchronize policies, rapidly certify reliable AI solutions, and act to stop emerging security issues.”
• “Federal agencies should avoid implementing unique compliance or procurement requirements just because a system includes AI components. To the extent they are needed, any agency-specific guidelines should focus on unique risks or concerns related to the deployment of the AI for the procured purpose.”
• The U.S. government should establish a “faster, criteria-based path for approval of AI tools” and “allow federal agencies to test and experiment with real data using commercial-standard practices—such as SOC 2 or International Organization for Standardization (ISO) audit reports—and potentially grant a temporary waiver for FedRAMP. AI vendors would still be required to meet FedRAMP continuous monitoring requirements while awaiting full accreditation.”
• The U.S. government should “preserve the AI Safety Institute in the Department of Commerce and build on the MOUs it has signed with U.S. AI companies—including Anthropic—to advance the state of the art in third-party testing of AI systems for national security risks.”
• The White House should also “direct the National Institute of Standards and Technology (NIST), in consultation with the Intelligence Community, Department of Defense, Department of Homeland Security, and other relevant agencies, to develop comprehensive national security evaluations for powerful AI models, in partnership with frontier AI developers, and develop a protocol for systematically testing powerful AI models for these vulnerabilities.”
• “To mitigate these risks, the federal government should partner with industry leaders to substantially enhance security protocols at frontier AI laboratories to prevent adversarial misuse and abuse of powerful AI technologies.”
• “AI systems are advancing at an unprecedented pace, and it’s only a matter of time before intentional or inadvertent harm from AI threatens U.S. national security, economic stability, or public safety. The U.S. government must act now to ensure it has insights into the capabilities of frontier AI models before they are deployed and that it has response plans in place for when failures inevitably occur. To fill this critical preparedness gap, President Trump should immediately direct the Department of Homeland Security (DHS) to establish an AI Emergency Response Program as a public-private partnership. Under this program, frontier AI developers like OpenAI, Anthropic, DeepMind, Meta, and xAI would participate in emergency preparedness exercises.”
• “These preparedness exercises would involve realistic simulations of AI-driven threats, explicitly requiring participants to actively demonstrate their responses to unfolding scenarios. Similar to the DHS-led ‘Cyber Storm’ exercises, which rigorously simulate cyberattacks and test real-time interagency and private-sector coordination, these AI-focused simulations should clearly define roles and responsibilities, ensure swift and effective communication between federal agencies and frontier AI companies, and systematically identify critical gaps in existing response protocols… Most frontier AI developers have already made voluntary commitments to share the information needed to create these exercises. To encourage additional companies to participate, this type of cooperation should be treated as a prerequisite for federal contracts, grants, or other agreements involving advanced AI.”
• “In the near future, small autonomous drones will pose a threat to U.S. civilians on par with large strategic missiles. To meet this threat, the Administration should procure and distribute equipment for disabling unauthorized drones, and ensure that there are clear lines of legal authority for civilian law enforcement to deploy this equipment.”
• “Federal agencies should take steps to align all AI uses with existing privacy and cybersecurity requirements – such as requirements for agencies to conduct privacy impact assessments – and to proactively guard against novel privacy and security risks introduced by AI.”
• “The administration should empower the AISI as a hub of AI expertise for the broader federal government to ensure AI strengthens rather than undermines U.S. national security. The administration could further support this AI hub of expertise with continued implementation of the AI National Security Memorandum, which strengthens engagement with national security agencies to better integrate expertise across classified and non-classified domains.”
• “The federal government needs a systematic way to track and learn from real-world incidents. A central reporting system for AI-related incidents would allow the government to investigate and update its approach to evaluations where appropriate.”
• “The federal government should significantly ramp up efforts to monitor China’s AI ecosystem, including the Chinese government itself (at all relevant levels and organizations), related actors such as state-owned enterprises, state research labs, and state-sponsored technology investment funds, and other actors, such as universities and tech companies.”
• “The U.S. government should partner with AI companies to share suspicious patterns of user behavior and other types of threat intelligence. In particular, the Intelligence Community and the Department of Homeland Security should partner with AI companies to share cyber threat intelligence, and the Department of Homeland Security should partner with AI companies to prepare for potential emergencies caused by malicious use or loss of control over AI systems. In addition, the Department of Commerce should receive, triage, and distribute reports on CBRN and cyber capabilities of frontier AI models to support classified evaluations of novel AI-enabled threats, building on a 2024 Memorandum of Understanding between the Departments of Energy and Commerce.”
• The Trump administration should “implement a mandatory AI incident reporting regime for sensitive applications across federal agencies. Federal agencies deploy AI systems for a wide range of safety- and rights-impacting use cases, such as using AI to deliver government services or predict criminal recidivism. AI failures, malfunctions, and other incidents in these contexts should be tracked and investigated to determine their root cause, inform risk management practices, and reduce the risk of recurrence.”
• “The Trump administration should establish a secure line for employees to report problematic company practices, such as failure to report system capabilities that threaten national security.”
• The U.S. government should “Define capabilities of concern and support the creation of threat profiles for different types of AI models…. A coalition of government agencies should develop frameworks that clearly define risky capabilities, including chem-bio capabilities of concern, so evaluators know what risks to test for. These frameworks could draw upon Appendix D of the National Institute of Standards and Technology’s (NIST) draft Managing Misuse Risk for Dual-Use Foundation Models. In addition, government agencies should build threat profiles that consider different combinations of users, AI tools, and intended outcomes, and design targeted policy solutions for these highly variable scenarios.”
• “The Trump administration should empower AISI to develop quantitative benchmarks for AI, including benchmarks that test a model’s resistance to jailbreaks, usefulness for making CBRN weapons, and capacity for deception… AISI should develop standards that cover topics including model training, pre-release internal and external security testing, cybersecurity practices, if-then commitments, AI risk assessments, and processes for testing and re-testing systems as they change over time.”
• “It is particularly valuable for the U.S. government to develop and maintain an ability to evaluate the capabilities of frontier models in areas where it has unique expertise, such as national security, CBRN issues, and cybersecurity threats. The Department of Commerce and NIST can lead on: (1) creating voluntary technical evaluations for major AI risks; (2) developing guidelines for responsible scaling and security protocols; (3) researching and developing safety benchmarks and mitigations (like tamper-proofing); and (4) assisting in building a private-sector AI evaluation ecosystem.”
• “The U.S. government should support the further development and broad uptake of evolving multistakeholder standards and best practices around disclosure of synthetic media—such as the use of C2PA protocols, Google’s industry-leading SynthID watermarking, and other watermarking/provenance technologies, including best practices around when to apply watermarks and when to notify users that they are interacting with AI-generated content.”
• “In contrast, an NDF recognizes that in many critical areas, the U.S. lacks the necessary high-quality, AI-ready data not just in the public sector, but also in key private-sector domains. Rather than just improving discoverability, the NDF would fund the creation, structuring, and strategic enhancement of both public and private-sector datasets.”
• “Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances. These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rights holders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation.”
• To ensure the copyright system “continues to support American AI leadership,” the U.S. government should work to “prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress” and encourage “more access to government-held or government-supported data. This would boost AI development in any case, but would be particularly important if shifting copyright rules restrict American companies’ access to training data.”
• The U.S. government should also partner with industry to “develop custom models for national security. The government needs models trained on classified datasets that are fine-tuned to be exceptional at national security tasks for which there is no commercial market—such as geospatial intelligence or classified nuclear tasks. This will likely require on-premises deployment of model weights and access to significant compute, given the security requirements of many national security agencies.”
• “The sufficiency of existing copyright law notwithstanding, we remain concerned that many AI stakeholders have used copyright-protected material to build and operationalize their models without consent, in ways damaging to publishers. While the legality of such activities is the subject of litigation, there is a danger that it will not be possible to undo the damage before a judicial resolution can occur. The AI Action Plan should therefore encourage AI developers to engage more collaboratively with content industries in a manner that serves the broader national interest and a win-win result for our global aspirations.”
• “The Administration should push back on the flawed text and data mining (TDM) opt-out frameworks being considered or recently adopted in various countries. These opt-out policies do not work, have the potential to harm American creators and businesses through the uncompensated taking of their property, overregulate content licensing, and turn copyright law and free market licensing upside down.”