There is a kind of regulation that works by choosing not to regulate. On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, a sweeping set of legislative recommendations to Congress that lays out how the United States intends to govern AI at the federal level. The document runs to dozens of pages and touches on everything from child safety to national security, from intellectual property to workforce development. But at its core sits a single, defining choice: the United States will not create a new federal AI regulatory agency. Instead, existing sector-specific regulators will oversee AI within their current jurisdictions, and industry-led standards will fill the gaps. Georgetown University’s Center for Security and Emerging Technology published a detailed analysis unpacking the framework’s implications, and its verdict is clear. This is not an absence of policy. It is a deliberate architectural decision about how power over one of the most transformative technologies in human history should be distributed, or more precisely, how it should not be concentrated. The Biden administration’s October 2023 executive order on AI safety had taken the opposite approach, mandating safety testing requirements and expanding federal agency oversight. The current administration revoked virtually all of those provisions, marking one of the sharpest and most consequential policy reversals in recent technology governance history. The pendulum has swung, and it has swung hard. (Georgetown CSET: Unpacking the White House National Policy Framework for AI)
The framework’s scope is strikingly broad. It addresses child safety, consumer protection, the energy costs of data centers powering AI systems, national security applications, intellectual property rights in an age of generative AI, free speech implications, innovation promotion, and workforce development for an economy increasingly shaped by automation. Within this sprawling agenda, the protection of minors receives the greatest emphasis. The framework recommends that Congress establish age-verification requirements for AI platforms likely to be accessed by children, give parents tools to manage their children’s privacy settings and content exposure, and impose severe penalties for the use of deepfake technology to create child sexual exploitation material. These proposals enjoy broad bipartisan support, which is precisely why they occupy such a prominent position in the document. Building a coalition around child safety is politically straightforward. Building one around algorithmic bias in hiring, the displacement of workers by AI automation, or the concentration of market power among a handful of AI companies is considerably more difficult. The framework’s relative silence on these structural issues is telling. It suggests a strategy of leading with consensus while deferring the harder questions, a pattern that is understandable in the short term but carries significant risks as the technology continues to advance faster than the political process can adapt. (WilmerHale: White House Releases National Policy Framework)
The decision not to create a new regulatory body rests on a specific logic. The administration argues that AI is too diffuse, too embedded across too many sectors, for any single agency to govern it effectively. Medical AI should be supervised by the FDA, which already understands the complexities of drug approval and medical device regulation. Financial AI should fall under the SEC and the CFPB, which have decades of experience with markets and consumer lending. Transportation AI belongs with NHTSA, which regulates vehicle safety. Each existing regulator brings domain expertise that a new, generalist AI agency would take years to develop. The framework also proposes the establishment of regulatory sandboxes, controlled environments where companies can test AI applications outside the constraints of existing regulation. The United Kingdom has already implemented AI sandboxes in its financial sector, and Singapore has launched similar programs. The American proposal draws on these international precedents, though it would operate at a scale and scope without parallel. Morrison Foerster’s analysis notes that the sandbox concept represents one of the framework’s more concrete and potentially actionable recommendations, though the details of implementation remain to be worked out by Congress and the relevant agencies. (Morrison Foerster: Trump Administration Releases National AI Policy Framework)
Federal preemption of state AI laws is by far the most contentious element of the framework. Over the past several years, states have moved aggressively to regulate AI in the absence of comprehensive federal legislation. California has passed transparency requirements for AI systems. Colorado has enacted a law prohibiting discriminatory AI decision-making in insurance and employment. Illinois requires employers to notify job applicants when AI is used in hiring decisions. New York City mandates bias audits for automated employment decision tools. Texas has introduced legislation governing the use of AI in law enforcement. This patchwork of state regulations has created a fragmented compliance landscape that technology companies argue costs billions of dollars annually and stifles innovation. The framework responds by recommending that Congress adopt legislation preempting state AI laws that impose undue burdens, while preserving state authority over traditional areas like child protection, fraud prevention, and consumer safety. The distinction sounds reasonable in principle. In practice, as Ropes & Gray’s analysis points out, the boundary between a legitimate state consumer protection law and an unduly burdensome AI regulation is anything but clear. The result will almost certainly be years of litigation, with federal courts asked to draw lines that Congress itself could not. State attorneys general have already signaled their opposition to broad preemption, arguing that federal inaction left them no choice but to act, and that stripping their authority now would leave consumers unprotected during the critical years while Congress debates and drafts comprehensive legislation. The parallels to the federal-state battles over data privacy regulation are striking and instructive. Just as the absence of a federal privacy law led to California’s CCPA becoming a de facto national standard, the absence of comprehensive federal AI legislation has allowed state-level AI regulations to proliferate. The framework seeks to prevent this dynamic from solidifying, but it may be too late to reverse it entirely. Democratic members of Congress have already introduced counter-proposals that would limit the scope of preemption, strengthen federal oversight mechanisms, and mandate transparency and accountability requirements that go significantly beyond what the framework envisions. (Ropes & Gray: White House Legislative Recommendations)
CSET’s AI governance mapping project provides essential context for understanding the framework’s place in the broader regulatory landscape. The April 2026 update, conducted in collaboration with MIT’s AI Risk Initiative, catalogs over one thousand AI governance documents across federal, state, and international jurisdictions. Their analysis reveals a striking picture. At the federal level, there is no comprehensive AI law. Instead, there are executive orders that can be revoked by the next president, agency guidance documents with uncertain legal force, and a handful of sector-specific rules that were not designed with AI in mind. At the state level, there is a proliferation of laws and proposals that vary wildly in scope, stringency, and approach. Internationally, the European Union’s AI Act stands as the most ambitious attempt at comprehensive AI regulation, classifying AI systems into four risk tiers and imposing strict conformity assessments and transparency obligations on high-risk applications. The framework’s implicit argument is that the American approach need not replicate the EU model. It can achieve adequate governance through a lighter touch that preserves the dynamism and speed of the American innovation ecosystem. Whether that argument holds depends on whether the existing regulatory apparatus can actually keep pace with the technology. CSET’s data suggests the gap between regulatory capacity and technological advancement is widening, not narrowing. The sheer volume of governance documents, over a thousand and growing rapidly, reflects the scale of the challenge and the extent to which the world is struggling to keep up with the pace of AI development. No country has yet found a governance model that satisfies all stakeholders, and the diversity of approaches across jurisdictions creates additional complexity for companies operating globally. (CSET: Mapping the AI Governance Landscape April 2026)
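The tiered logic of the EU model is concrete enough to sketch in code. What follows is a deliberately simplified illustration, not the Act’s legal text: the tier names track the Act’s public summaries, and the obligation attached to each tier is compressed to a single line.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified rendering of the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "banned outright (e.g., social scoring systems)"
    HIGH = "conformity assessment plus transparency obligations"
    LIMITED = "disclosure duties (e.g., a chatbot must identify itself as AI)"
    MINIMAL = "no new obligations"

def obligations_for(tier: RiskTier) -> str:
    # In the EU design, duties attach to the system's risk tier,
    # not to the sector of whichever agency happens to oversee it.
    return tier.value

# An AI hiring-screening tool is a canonical high-risk example under the Act.
print(obligations_for(RiskTier.HIGH))
```

The sketch exists to sharpen the contrast: in Brussels, obligations follow the risk tier of the system; in the American framework, they follow the jurisdiction of whichever sector regulator already exists.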
The innovation chapter is the framework’s weakest section. CSET’s analysis identifies Section V, which addresses innovation promotion, as notably thin on specifics. It recommends regulatory sandboxes, improved access to federal datasets for AI training, and sector-specific oversight through existing regulatory authority. But it does not explain how, concretely, these measures will stimulate innovation, reduce barriers to deployment, or help American companies maintain their technological edge. The gap between stating a goal and implementing a policy is enormous, and the framework leaves virtually the entire implementation challenge to Congress and the agencies. The proposal to open federal datasets for AI training is perhaps the most promising idea in the section, but it immediately raises questions about data quality, privacy protection, and the potential for bias embedded in government-collected data to be amplified by AI systems trained on it. Meanwhile, AI investment in the United States exceeded one hundred billion dollars in 2025 alone, and the startup ecosystem is thriving. But for smaller companies that lack the legal teams and compliance budgets of the tech giants, regulatory uncertainty is the single greatest risk factor. The framework’s ambiguity does not resolve that uncertainty. It preserves it, and in a fast-moving market where first-mover advantage can determine a company’s survival, that preservation of uncertainty has real economic consequences that fall disproportionately on those least able to absorb them. The National Institute of Standards and Technology published its AI Risk Management Framework in 2023 to widespread praise, but as a voluntary guideline it lacks enforcement teeth. How the legislative framework would build on or supersede NIST’s work remains unclear, another gap in a document that is stronger on principles than on mechanisms. There is also the workforce dimension to consider. AI-driven automation is already reshaping labor markets, with estimates suggesting that AI-related job displacement in the United States reached sixteen thousand positions per month in recent analyses. The framework’s workforce development section acknowledges this challenge in general terms but offers no concrete programs, funding commitments, or timelines for reskilling initiatives. For workers whose jobs are being automated away, the promise of future workforce development programs is cold comfort when the automation is happening now.
The light-touch approach must be understood in the context of the geopolitical competition over AI supremacy. The United States is locked in an intensifying technological rivalry with China, and the fear of falling behind shapes every aspect of AI policy. Excessive regulation, the argument goes, would slow American companies while Chinese state-backed competitors, operating under a very different governance model, race ahead. The framework repeatedly invokes the language of maintaining American leadership and ensuring competitive advantage, making clear that its regulatory restraint is as much a national security calculation as an economic one. China enacted interim regulations on generative AI in 2023, pursuing its own distinctive path of state-directed industrial development combined with strict content control. In early 2025, a Chinese AI model called DeepSeek captured international attention, demonstrating that Chinese AI development capabilities had advanced further and faster than many Western analysts had assumed. At the other end of the spectrum, the EU’s strict AI Act has raised concerns about the compliance burden on European companies, with some observers warning that it could drive AI development offshore or simply slow the pace of European innovation relative to less regulated markets. The American framework positions itself as a third way between Chinese state control and European comprehensive regulation, seeking to harness market forces and industry self-governance while providing enough structure to manage the most salient risks. Whether self-governance can adequately manage the societal risks of increasingly powerful AI systems remains an open and deeply contested question. The history of industry self-regulation in other sectors, from financial derivatives before the 2008 crisis to social media content moderation, provides grounds for both optimism and skepticism. Industries can self-regulate effectively when incentives align, but they consistently struggle when the costs of responsible behavior are high and the penalties for irresponsible behavior are low or nonexistent. (Cooley: White House Releases AI Regulatory Blueprint)
For Japan’s AI policy, the American choice serves as a critical reference point. Japan’s government, operating under its AI Strategy 2025, has adopted a risk-based approach that stops well short of the EU’s comprehensive regulatory model. The Ministry of Economy, Trade and Industry has revised its AI Business Guidelines to promote self-regulation and industry-led standard-setting, an approach that shares significant philosophical overlap with the American framework. The fact that the United States has opted for sector-specific regulation through existing agencies carries two implications for Japan. First, it validates Japan’s own preference for avoiding heavy-handed regulation that could undermine international competitiveness. Second, and more uncomfortably, it raises the same question that Japan’s own policymakers have struggled to answer: whether existing regulatory frameworks, designed for a pre-AI world, can actually govern a technology that is evolving faster than bureaucracies can adapt. Autonomous vehicles, medical AI, generative AI content production, and AI-driven workforce displacement do not fit neatly into any single ministry’s jurisdiction. The international AI governance principles agreed upon during the 2023 G7 Hiroshima AI Process provide a multilateral foundation, but the gap between international principles and domestic implementation remains vast. Whether Japan charts a path closer to the American light-touch model or gravitates toward the EU’s more prescriptive approach will shape both its AI industry’s competitive position and its citizens’ rights and protections in the age of intelligent machines. South Korea, India, and Brazil are all developing their own AI governance frameworks, each calibrating a different balance between innovation promotion and risk mitigation. The American framework, precisely because it comes from the world’s largest AI market, will inevitably influence these deliberations. A global race to the bottom on AI regulation is one possible outcome. A convergence toward international standards, building on the G7 and OECD frameworks, is another. The outcome is far from determined, and the choices made in the next two to three years will establish path dependencies that are very difficult to reverse. (Holland & Knight: White House Releases National Policy Framework)
The framework’s greatest strength and greatest vulnerability are one and the same: flexibility. A sector-specific approach allows regulation to be tailored to the unique characteristics and risks of each domain. The FDA can apply its understanding of clinical evidence to medical AI. The SEC can leverage its market expertise to govern algorithmic trading. NHTSA can use its vehicle safety data to oversee autonomous driving systems. This is the promise, and it is not without merit. Domain expertise matters, and regulators who understand the intricacies of their sectors are better positioned to identify where AI creates genuine risks rather than merely unfamiliar ones. The danger, however, lies in the seams. When an AI system operates across multiple sectors simultaneously, which regulator takes the lead? When a large language model provides medical advice, offers financial guidance, generates news articles, produces educational content, and engages in conversation with children, all within the same platform, does oversight belong to the FDA, the SEC, the FCC, the FTC, or the Department of Education? This regulatory gap problem has been identified repeatedly by researchers at the Brookings Institution and Stanford University’s Institute for Human-Centered AI as the structural weakness of sector-specific approaches. General-purpose AI models resist categorization by sector because they are, by definition, not confined to any single sector. Companies like OpenAI and Google offer AI services that span healthcare, finance, education, media, entertainment, and dozens of other domains, making it essentially impossible to assign regulatory jurisdiction through a sector-based framework. The framework does not offer a clear answer to this fundamental challenge, and it is not obvious that one exists within the sector-specific paradigm it champions. The emergence of AI agents, systems capable of autonomous action across multiple platforms and services, will only intensify this jurisdictional ambiguity in the years ahead.
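The seams problem can be stated almost mechanically. Here is a toy sketch in Python, with a sector-to-agency table that is an illustrative assumption rather than the framework’s actual allocation, showing why a sector-keyed lookup degrades precisely for the general-purpose systems that matter most:

```python
# Toy model of sector-specific jurisdiction. The table below is an
# illustrative simplification, not the framework's actual allocation.
SECTOR_TO_REGULATOR = {
    "medical": "FDA",
    "financial": "SEC/CFPB",
    "transportation": "NHTSA",
    "consumer": "FTC",
}

def assign_jurisdiction(sectors: set[str]) -> str:
    agencies = {SECTOR_TO_REGULATOR[s] for s in sectors if s in SECTOR_TO_REGULATOR}
    if len(agencies) == 1:
        return agencies.pop()  # clean case: one sector, one regulator
    if not agencies:
        return "gap: no existing mandate applies"
    return "seam: contested among " + ", ".join(sorted(agencies))

# A narrow medical-imaging model maps cleanly to a single regulator...
print(assign_jurisdiction({"medical"}))
# ...but a general-purpose model spanning sectors returns a contested seam.
print(assign_jurisdiction({"medical", "financial", "consumer"}))
```

The failure mode is not that the lookup picks the wrong agency. It is that for a general-purpose model the question has no single answer, which is exactly the structural weakness the Brookings and Stanford researchers describe.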
Not creating a new regulator is not the same as doing nothing. But the distance between a legislative blueprint and enforceable policy is vast, and the framework leaves most of that distance to be covered by a Congress that has not historically distinguished itself by the speed or coherence of its technology legislation. CSET’s analysis characterizes the framework as exactly that, a blueprint, noting that it carries no binding legal force and creates no new obligations. What matters now is whether Congress can translate these recommendations into legislation, and if so, how faithfully and how quickly. The political equation is extraordinarily complex. AI companies are spending record sums on lobbying. Consumer advocates are demanding stronger protections. National security hawks want to ensure American technological supremacy. State governments are fighting to preserve their regulatory authority. Civil rights organizations are pushing for algorithmic accountability requirements that the framework barely acknowledges. Labor unions are raising alarms about AI-driven workplace surveillance and automated management decisions that affect working conditions without human review. And all of this is unfolding against the backdrop of a technology that continues to advance at a pace that makes most legislative timelines seem almost quaint. The United States has made its bet. No new regulator. Existing agencies and industry standards will suffice. Whether that bet pays off or proves catastrophically insufficient will be determined by the race between the speed of AI development and the speed of policy response. For now, AI is winning that race by a commanding margin, and the gap shows no signs of closing. We will witness the outcome of this wager in real time, whether we are ready for it or not. The stakes could not be higher. The choices made now about AI governance will shape economies, societies, and the distribution of power for decades to come. Getting it wrong carries consequences that no regulatory sandbox can contain. (Consumer Finance Monitor: What the Framework Means and What Comes Next)
The framework arrives at a moment of profound uncertainty about where AI technology is heading and how fast it will get there. The capabilities of frontier AI models have expanded dramatically over the past two years, from sophisticated reasoning and code generation to autonomous task completion and multimodal understanding. Each new generation of models surprises even the researchers who built them. In this environment of rapid and unpredictable advancement, the choice between regulatory frameworks is not merely a policy preference. It is a bet on the future trajectory of a technology whose ultimate capabilities and risks remain genuinely unknown. The American bet is that flexibility and speed matter more than comprehensiveness and precaution. History will judge whether that wager was wise.
About the author
灰島
A Japanese writer in their thirties who reads about international affairs, geopolitics, and economics every day, approaching world news by reading the present through the context of history. Not a specialist, but they write sincerely, emotions included.
