A critical analysis of America's AI Action Plan (part 1) #55
America's AI Action Plan outlines thirty policies for technological leadership, but can national interests align with global AI governance? I've analyzed each point for feasibility and ethics.
(Service Announcement)
This newsletter (which now has over 5,000 subscribers and many more readers, as it’s also published online) is free and entirely independent.
It has never accepted sponsors or advertisements, and is made in my spare time.
If you like it, you can contribute by forwarding it to anyone who might be interested, or promoting it on social media.
Many readers, whom I sincerely thank, have become supporters by making a donation.
Thank you so much for your support, now it's time to dive into the content!
As the United States prepares to navigate the complex landscape of artificial intelligence governance under new leadership, the recently unveiled "America's AI Action Plan" represents a pivotal moment in the intersection of technology policy, national security, and international cooperation. This comprehensive 30-point strategy, structured around three fundamental pillars (accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security), offers a window into how the world's largest economy intends to position itself in the global AI race.
The plan emerges at a particularly crucial juncture in AI development, where the decisions made by major powers will inevitably shape the trajectory of this transformative technology for decades to come. Unlike previous technological revolutions that unfolded over generations, artificial intelligence presents unique challenges that demand immediate attention: its dual-use nature means the same technologies that could revolutionize healthcare and education could also be weaponized or used to undermine democratic institutions. The speed of AI advancement, coupled with its potential for exponential impact across every sector of society, creates an unprecedented policy challenge that requires balancing innovation with responsibility, competition with cooperation, and national interests with global stability.
What makes this particular action plan especially significant is its departure from previous approaches to AI governance, which emphasized multilateral cooperation and shared international standards. Instead, the document reveals a distinctly more nationalist approach, prioritizing American technological dominance while simultaneously seeking to counter perceived threats from competing powers, particularly China. This strategic pivot reflects broader geopolitical tensions but also raises fundamental questions about whether artificial intelligence, a technology with inherently global implications, can be effectively governed through national-first policies.
The plan's three-pillar structure reveals a sophisticated understanding of AI's multifaceted challenges, yet each pillar embodies tensions that may ultimately undermine its effectiveness. The first pillar's emphasis on removing regulatory barriers to accelerate innovation, while economically appealing, raises concerns about whether deregulation might compromise the very safety measures that ensure AI systems remain beneficial rather than harmful. The second pillar's focus on building American AI infrastructure, though necessary for maintaining technological leadership, demands enormous investments in energy generation and semiconductor manufacturing that could have significant environmental and economic implications. The third pillar's approach to international AI diplomacy, perhaps most controversially, appears to prioritize competitive advantage over collaborative governance, potentially fragmenting global efforts to address AI risks that transcend national boundaries.
To provide a comprehensive assessment of this ambitious strategy, I have undertaken a systematic examination of all thirty points contained within the action plan, subjecting each proposal to rigorous dual-criteria evaluation. Every initiative has been analyzed first through the lens of technical and practical feasibility, assessing whether the proposed measures can realistically be implemented given current technological capabilities, bureaucratic structures, and resource constraints. Then, each point has been evaluated from an ethical perspective, considering its potential impact on global cooperation, technological equity, democratic values, and the broader implications for humanity's relationship with artificial intelligence.
This methodological approach reveals striking disparities between what is technically achievable and what is ethically advisable, uncovering fundamental tensions within the plan that may ultimately determine its success or failure. While many proposals demonstrate sound technical foundations and clear pathways to implementation, their ethical implications often present more complex challenges, particularly regarding international cooperation, technological access, and the concentration of AI capabilities within specific geopolitical spheres.
The analysis that follows examines how these feasibility and ethical considerations interact across the plan's three pillars, revealing whether America's vision for AI leadership can simultaneously serve national interests while contributing to the kind of global cooperation that artificial intelligence's transformative potential demands. The stakes could not be higher: the choices made today about AI governance will determine whether this technology becomes a force for shared prosperity and human flourishing, or whether it exacerbates existing inequalities and creates new forms of international instability.
Pillar 1: Accelerate AI Innovation
1. Remove red tape and onerous regulation
This policy aims to eliminate bureaucratic obstacles that hinder AI innovation by rescinding Biden's Executive Order 14110, which the Trump administration views as foreshadowing an overly burdensome regulatory regime. The plan includes launching a Request for Information to identify Federal regulations impeding AI development, working with OMB to revise or repeal unnecessary rules, and limiting federal funding to states with burdensome AI regulations. The FCC would evaluate whether state AI regulations interfere with federal communications authority, and the FTC would review investigations to ensure they don't unduly burden AI innovation.
Feasibility: 7/10 - Technically possible through executive orders and administrative changes, but could face legal resistance from states and constitutional challenges regarding states' rights.
Ethical evaluation: 4/10 - Extreme deregulation could increase risks to AI safety and ethics, and using federal funding to pressure states raises federalism concerns. More broadly, efforts to hoard or steer critical resources like compute within a small handful of wealthy, industrialized democracies may also come with diplomatic costs.
2. Ensure that frontier AI protects free speech and American values
The administration plans to ensure AI systems reflect "truth and objectivity" rather than "social engineering agendas." This includes revising NIST's AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion (DEI), and climate change. Federal procurement guidelines would be updated to contract only with frontier LLM developers whose systems are deemed "objective and free from top-down ideological bias." Additionally, DOC through NIST's CAISI would conduct research and publish evaluations of frontier models from China for alignment with CCP talking points and censorship.
Feasibility: 5/10 - AI systems inevitably carry some bias: they are built by humans whose choices shape the outputs they produce. Guaranteeing absolute objectivity in AI models is therefore difficult, if not impossible.
Ethical evaluation: 3/10 - Generative AI systems trained to espouse particular ideological views could reinforce political echo chambers and worsen partisan biases, and imposing specific "American values" could limit international cooperation. The fear of so-called "woke ideology" is totally ridiculous and has no scientific basis.
3. Encourage open-source and open-weight AI
The plan recognizes that open-source and open-weight AI models have unique value for innovation, allowing startups to use them flexibly without dependence on closed model providers. Proposed actions include improving the financial market for compute by creating spot and forward markets similar to commodities, partnering with leading tech companies to increase research community access to computing resources, building foundations for a sustainable NAIRR (National Artificial Intelligence Research Resource) operations capability, and convening stakeholders to drive adoption of open models by small and medium-sized businesses.
Feasibility: 8/10 - Technically achievable with adequate investments. Successful precedents exist in open-source software.
Ethical evaluation: 8/10 - Open-source approach promotes global innovation, reduces inequalities in AI access, and fosters transparency. The United States should instead focus, in the words of analysts Matthew Mittelsteadt and Keegan McBride, on "embracing competition and openness".
4. Enable AI adoption
Recognizing that limited and slow adoption of AI is a major bottleneck, particularly in critical sectors like healthcare, the plan proposes establishing regulatory sandboxes or AI Centers of Excellence where researchers and enterprises can rapidly deploy and test AI tools while committing to open data sharing. Domain-specific efforts in healthcare, energy, and agriculture would convene stakeholders to accelerate development of national AI standards.
Feasibility: 6/10 - Regulatory sandbox models already exist in other countries, and 2024 saw increased attention to AI safety with the launch of new AI safety institutes.
Ethical evaluation: 8/10 - Balanced approach that promotes innovation while maintaining safeguards. Fosters competitiveness without excluding other countries.
5. Empower American workers in the age of AI
The Trump Administration's worker-first AI agenda includes multiple initiatives to help workers navigate the AI transition. The plan emphasizes that AI should complement work, not replace it.
Feasibility: 7.5/10 - Training and research programs are technically achievable with adequate funding.
Ethical evaluation: 9/10 - A worker-first approach is essential because AI automation threatens widespread job displacement, exacerbates inequality, and accelerates the transition to a rapidly evolving job market. Safeguarding workers is critical to avoiding deeper social inequalities.
6. Support next-generation manufacturing
The plan emphasizes that AI will enable innovations in the physical world including autonomous drones, self-driving cars, and robotics.
Feasibility: 6/10 - Achievable with investments and inter-agency coordination, but requires time to develop competitive manufacturing capabilities.
Ethical evaluation: 7/10 - Important for economic competitiveness but must be balanced with environmental and social considerations.
7. Invest in AI-enabled science
The plan recognizes that AI will transform science itself, with AI systems already generating models of protein structures and novel materials. Proposed investments include automated cloud-enabled labs for various scientific fields built by private sector and research institutions in coordination with DOE (Department of Energy) National Laboratories. Long-term agreements would support Focused-Research Organizations using AI for fundamental scientific advancements. Federal agencies would incentivize researchers to release high-quality datasets publicly and require disclosure of non-proprietary datasets used by AI models during research.
Feasibility: 8/10 - Technically achievable with adequate infrastructure investments and coordination between research institutions.
Ethical evaluation: 7.5/10 - Open scientific research using AI can accelerate discoveries that benefit all humanity.
8. Build world-class scientific datasets
Recognizing that high-quality data has become a national strategic asset, the plan aims to lead creation of the world's largest AI-ready scientific datasets. The NSTC (National Science and Technology Council) Machine Learning and AI Subcommittee would make recommendations on minimum data quality standards for biological, materials science, chemical, and physical data.
Feasibility: 7/10 - Requires significant coordination across agencies and careful balance between data access and privacy protections.
Ethical evaluation: 8/10 - The AI advisory body of the UN, in its report "Governing AI for Humanity," established the first guiding principle that AI should be governed inclusively, by and for the benefit of all. High-quality datasets advance global scientific progress.
9. Advance the science of AI
Acknowledging that future breakthroughs may transform what is possible with AI just as LLMs represented a paradigm shift, the plan prioritizes investment in theoretical, computational, and experimental research to preserve America's leadership in discovering new AI paradigms. This priority would be reflected in the forthcoming National AI R&D Strategic Plan, ensuring strategic targeted investment in the most promising paths at the frontier of AI science.
Feasibility: 8/10 - Basic research investments have proven track record of yielding breakthroughs, though timelines are unpredictable.
Ethical evaluation: 8.5/10 - Fundamental AI research advances benefit the global scientific community and humanity as a whole.
10. Invest in AI interpretability, control, and robustness breakthroughs
Acknowledging that the inner workings of frontier AI systems are poorly understood, making it challenging to use advanced AI in defense and national security applications, the plan proposes launching a technology development program led by DARPA in collaboration with CAISI and NSF. This would advance AI interpretability, control systems, and adversarial robustness. The program would coordinate AI hackathon initiatives and academic partners to test AI systems for transparency, effectiveness, use control, and security vulnerabilities.
Feasibility: 4/10 - Extremely complex research area with uncertain progress and long timelines.
Ethical evaluation: 10/10 - Critical for global AI safety and building trust in AI systems.
11. Build an AI evaluations ecosystem
The plan recognizes evaluations as critical tools for defining and measuring AI reliability and performance.
Feasibility: 8/10 - Existing evaluation frameworks can be expanded, and efforts are already underway to promote international cooperation and harmonization of AI rules.
Ethical evaluation: 9/10 - Essential for AI safety and reliability globally.
12. Accelerate AI adoption in government
To enable the federal government to serve the public with greater efficiency through AI, the plan formalizes the Chief Artificial Intelligence Officer Council (CAIOC) as the primary venue for interagency coordination and collaboration on AI adoption. It also creates a talent-exchange program for rapid details of specialized AI talent between agencies; develops an AI procurement toolbox, managed by GSA, for uniformity across the federal enterprise; implements an Advanced Technology Transfer and Capability Sharing Program; mandates that all federal employees whose work could benefit from frontier language models have access to such tools; and convenes agencies with High Impact Service Providers to pilot AI for improving service delivery.
Feasibility: 7/10 - Achievable but requires overcoming significant bureaucratic inertia and cultural resistance to change.
Ethical evaluation: 7/10 - Government efficiency improvements benefit citizens, though care must be taken to ensure AI use respects privacy and civil liberties.
13. Drive adoption of AI within the department of defense
Recognizing AI's potential to transform both warfighting and back-office DOD (Department of Defense) operations, the plan includes specific actions for military AI adoption. These include identifying talent and skills required for DOD's workforce to leverage AI at scale, establishing an AI & Autonomous Systems Virtual Proving Ground, developing streamlined processes for classifying and automating workflows, prioritizing DOD-led agreements with cloud providers for priority access during national emergencies, and growing Senior Military Colleges into hubs of AI research and talent building with AI-specific curriculum throughout majors.
Feasibility: 5.5/10 - Military bureaucracy presents unique challenges, but strong leadership commitment can drive change.
Ethical evaluation: 4/10 - While military efficiency is important, the weaponization of AI raises serious ethical concerns globally. The use of AI as a weapon is identified as a major risk requiring international cooperation.
14. Protect commercial and government AI innovations
Recognizing the need to balance dissemination of cutting-edge AI technologies with national security concerns, the plan proposes collaboration between several departments with leading American AI developers. This would enable the private sector to actively protect AI innovations from security risks including malicious cyber actors and insider threats. The initiative aims to effectively address security risks to American AI companies, talent, intellectual property, and systems while maintaining competitiveness.
Feasibility: 7/10 - Possible with industry-government cooperation, but difficult balance between security and openness.
Ethical evaluation: 3/10 - Necessary for security, but risks creating barriers to global innovation.
15. Combat synthetic media in the legal system
Building on the Take It Down Act championed by First Lady Melania Trump, the plan tackles the challenges that media produced through artificial intelligence pose to the legal system. Proposed measures include turning the National Institute of Standards and Technology’s “Guardians of Forensic Evidence” deepfake evaluation program into formal guidelines with optional forensic benchmarks; directing the Department of Justice to issue guidance so that agencies handling adjudications consider deepfake standards similar to those in Rule 901(c) of the Federal Rules of Evidence; and submitting formal comments on any future proposals to amend those rules with deepfake-related provisions.
Feasibility: 8/10 - Deepfake-detection technologies already exist, and they rely on the same machine learning techniques that power generative AI.
Ethical evaluation: 6/10 - Critical for maintaining legal system integrity and public trust. It is essential that this does not become a limitation on people’s freedom of expression.
In the next issue, which will arrive like clockwork next Monday, we will analyze the other two pillars of the paper and draw conclusions.
Don’t forget to subscribe!
Even in this field, we are only at the beginning.