On July 23, the Trump administration released the AI Action Plan developed in accordance with Executive Order 14179. In the first of a series of client updates, we discuss Pillar I of the Plan.

Background

President Trump formally introduced America’s AI Action Plan (the “Plan”) during his keynote speech at the “Winning the AI Race” summit on July 23, 2025.  The Plan follows President Trump’s revocation of former President Biden’s Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of AI) from October 2023 and President Trump’s own Executive Order 14179 (Removing Barriers to American Leadership in AI) during the first week of his administration (as described in our prior client update).  Among other things, Executive Order 14179 established that it is U.S. policy “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”  It also directed White House staff, including the Special Advisor for AI and Crypto, the Assistant to the President for Science and Technology (APST), and the Assistant to the President for National Security Affairs (APNSA), in coordination with relevant agencies, to develop an action plan within 180 days to achieve that policy goal. On April 24, 2025 – only two months after issuing a Request for Information – the White House Office of Science and Technology Policy (OSTP) published the more than 10,000 comments it had received from the public.

The 23-page Plan frames the development of AI technologies as a global competition akin to the Cold War-era space race, with the corresponding potential to fundamentally impact swaths of the American economy, including healthcare and pharma, energy, theoretical and applied sciences, education, media, communications and manufacturing. The Plan offers a commensurately sweeping series of recommendations, directives and proposals, across three pillars: (i) accelerating AI innovation; (ii) building American AI infrastructure; and (iii) leading in international diplomacy and security.  Collectively, these pillars entail over 100 federal policy actions for regulating, implementing and investing in artificial intelligence, approximately 60 of which are detailed in Pillar I.

This update, the first in a series, focuses on Pillar I.

Pillar I – Innovation

The first pillar in the Plan seeks to foster innovation in AI technology and adoption through a whole-of-government series of recommended policy actions. These include identifying and reducing regulations and policies that “hinder AI development or deployment,” defining high-quality datasets as a national strategic asset, publishing a National AI Research and Development Strategic Plan, and organizing and funding initiatives to support AI-related science, research and development, and the supply chain. The Plan contains numerous avenues for federal agencies to seek input from the private sector.  Notably, it proposes conditioning federal funding to states on their avoidance of “unduly restrictive” AI policies, a strategy that echoes the proposed moratorium on enforcement of state AI laws that was stripped from the administration’s recent budget reconciliation bill at the eleventh hour.

Pillar I also explicitly acknowledges the likelihood that AI adoption will displace aspects of the American workforce; it directs federal studies on this subject, the creation of re-training programs, and the reclassification of AI-related job training to incentivize employers to offer tax-free reimbursement for such training.

To achieve its goals, Pillar I directs the following policy actions, among others:

Regulation and oversight

Perhaps most notable for the private sector is the Plan’s directive to identify and rescind or modify any regulations that impinge on development of AI technologies or their adoption.  

The Plan directs the OSTP to seek public feedback about current Federal regulations that “hinder” AI innovation and adoption, with a corresponding directive to “take appropriate action.”   Similarly, the Office of Management and Budget (“OMB”) is directed to coordinate with federal agencies to identify, revise or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements and interagency agreements that could “unnecessarily hinder AI development or deployment.”  Notably, the directive is sector-agnostic, meaning that effectively any agency or industry could be within scope.  The Plan also directs OMB to work with agencies that have AI-related discretionary funding programs to ensure such funding is directed toward states with less restrictive, directionally consistent AI regulations, and to consider a state’s “regulatory climate” when making funding decisions.

In addition to reviewing regulation and funding decisions, the Plan directs the Federal Trade Commission (FTC) to review open investigations commenced under the previous administration to ensure they do not “advance theories of liability that unduly burden AI innovation[.]” In addition, the Plan directs a review of all FTC final orders, consent decrees and injunctions, and, where appropriate, to seek to modify any that unduly burden AI.

Despite the overarching theme of deregulation, the Plan also identifies potential areas for AI regulation and previews likely government oversight themes, such as ensuring that American AI is free from “ideological bias” and “social engineering agendas.”  To this end, the Plan directs updates to the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, as well as federal procurement guidelines, to eliminate references to “misinformation,” diversity, and climate change, and to limit government contracts with frontier large language model (LLM) developers to those who ensure that their systems meet these requirements.

Likewise, Pillar I expresses concern about the increasing prevalence of synthetic media and corresponding risks it may present to the U.S. legal system – for example, the use of deepfake content in courtroom settings.  The Plan directs NIST and the Department of Justice (DOJ) to develop guidelines governing the potential of deepfakes to distort evidentiary proceedings, and to explore potential revisions to the Federal Rules of Evidence.  Alongside the recently enacted Take It Down Act and a wave of similar state laws, this aspect of Pillar I emphasizes a continuing political focus on the risks presented by AI-generated media.

Finally, and consistent with the U.S. government’s view of certain data as a strategic national asset (see our recent update on DOJ’s rulemaking restricting bulk overseas transfer of sensitive U.S. data), the Plan also directs agencies to collaborate with “leading American AI developers” on developing and refining security and integrity protections for private-sector AI innovations, including protections against malicious cyber actors, insider threats, and other risks.

Investment and AI adoption initiatives

In addition to the above-referenced regulatory directives, the Plan sets forth a number of structural economic initiatives aimed at prioritizing investment in AI-enabling sectors – for example, the market for computing power, manufacturing and supply chain, job training, industry-specific initiatives, and scientific research and development.

These initiatives include:

  • Publishing a National AI Research & Development Strategic Plan, led by OSTP, to guide federal investments
  • Enhancing access to large-scale computing power by improving the financial market for compute, using the commodities spot and forward markets as a model
  • Supporting next-generation, AI-enabled manufacturing, and working with industry to identify supply chain challenges faced by American robotics and drone manufacturing
  • Establishing regulatory sandboxes to rapidly deploy and test AI tools, including under the purview of the Food and Drug Administration (FDA) and Securities and Exchange Commission (SEC)
  • Launching private sector-specific efforts in healthcare, energy and agriculture to facilitate the development and adoption of national AI standards
  • Prioritizing AI-related educational and workforce training programs, reclassifying such programs as eligible for tax-free employer reimbursement, and piloting and funding programs to ameliorate the impacts of job displacement on the American workforce

Technology and data

The Plan also seeks to facilitate the development of high-quality datasets and of open-source and open-weight AI models, and to enhance the “interpretability” of large language models’ operations.

To achieve these goals, the Plan recommends:

  • That the National Science and Technology Council seek to establish minimum data quality standards for use in biological, materials science, chemical, physical and other scientific data modalities in model training
  • Establishing secure government compute environments to enable controlled access to otherwise restricted federal data for AI use cases
  • Encouraging stakeholders to develop open-source and open-weight models that small and medium-size businesses can easily adopt
  • Through a whole-of-government approach, launching technology development programs that aim to make LLM output more predictable and understandable, noting that “[t]oday, the inner workings of frontier AI systems are poorly understood[,]” and presenting that as an obstacle to the use of AI in particularly high-stakes settings

Takeaways

  • The Plan offers the clearest signal to date of an AI deregulatory phase at the federal level. Additionally, the Plan’s directive that federal agencies seek to condition federal funding on states’ alignment with U.S. policy to accelerate innovation is likely to put substantial pressure on states’ actions in this space.
  • Relatedly, federal agencies are likely to seek comment on, or actively initiate, modifications to regulations that might restrict AI development.  However, companies should move cautiously in streamlining current AI compliance programs, since hard-earned compliance programs may be difficult to reinstate once dismantled, and such compliance practices might otherwise still serve to reduce private legal, business, reputational and other risks.
  • The Plan signals the administration’s concern about bias, data security, and the risks of deepfakes and synthetic media.  Companies should consider reviewing these aspects of their security and integrity programs, conducting risk assessments and assessing what logging and documentation can be used to demonstrate compliance with anti-bias requirements, and monitoring for updates to the NIST AI Risk Management Framework.
  • The Plan reflects a further step by the U.S. government in classifying data and technology as strategic national security assets.  Companies should consider conducting and documenting security assessments of their AI technology stack, procurement policies, and supply chain, including for risks arising from malicious cyber actors, insider threats, and third-party vendors and suppliers.


The New Trump Administration

Visit The New Trump Administration for the latest insights from across Davis Polk’s practices and offices on the evolution and impacts of the second Trump presidency.


This communication, which we believe may be of interest to our clients and friends of the firm, is for general information only. It is not a full analysis of the matters presented and should not be relied upon as legal advice. This may be considered attorney advertising in some jurisdictions. Please refer to the firm's privacy notice for further details.