    Benjamin Franklin Institute
    Don’t Regulate AI Models. Regulate AI Use

By Team_Benjamin Franklin Institute | February 2, 2026
• Hazardous dual-use functions (for example, tools to fabricate biometric voiceprints to defeat authentication).
• Regulatory adherence: confine such capabilities to licensed facilities and verified operators, and prohibit capabilities whose primary purpose is unlawful.

    Close the loop at real-world choke points

    AI-enabled systems become real when they’re connected to users, money, infrastructure, and institutions, and that’s where regulators should focus enforcement: at the points of distribution (app stores and enterprise marketplaces), capability access (cloud and AI platforms), monetization (payment systems and ad networks), and risk transfer (insurers and contract counterparties).

    For high-risk uses, we need to require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and postincident review, paired with privacy protections. We need to demand evidence for deployer claims, maintain incident-response plans, report material faults, and provide human fallback. When AI use leads to damage, firms should have to show their work and face liability for harms.
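The controls named above — capability gating tied to a risk tier, plus tamper-evident logging for audit and postincident review — can be illustrated with a minimal sketch. The tier names, capability labels, and hash-chained log design here are illustrative assumptions, not a prescribed standard; real deployments would use signed, externally anchored logs and a regulator-defined tier taxonomy.

```python
import hashlib
import json
import time

# Hypothetical risk tiers and the capabilities each tier may invoke.
TIER_CAPABILITIES = {
    "low": {"summarize", "translate"},
    "high": {"summarize", "translate", "payments", "medical_triage"},
}


class TamperEvidentLog:
    """Append-only log in which each entry hashes the previous one,
    so any retroactive edit breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def append(self, record: dict) -> str:
        payload = json.dumps(
            {"prev": self._prev_hash, "ts": time.time(), **record},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((digest, payload))
        self._prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; False means the log was altered."""
        prev = "0" * 64
        for digest, payload in self.entries:
            if json.loads(payload)["prev"] != prev:
                return False
            if hashlib.sha256(payload.encode()).hexdigest() != digest:
                return False
            prev = digest
        return True


def invoke(operator_id: str, tier: str, capability: str, log: TamperEvidentLog):
    """Gate a capability by risk tier; log every attempt, allowed or not."""
    allowed = capability in TIER_CAPABILITIES.get(tier, set())
    log.append({"operator": operator_id, "tier": tier,
                "capability": capability, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{capability!r} not permitted at tier {tier!r}")
```

Note that denied attempts are logged before the exception is raised: the audit trail records what was tried, not only what succeeded, which is what postincident review depends on.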

    This approach creates market dynamics that accelerate compliance. If crucial business operations such as procurement, access to cloud services, and insurance depend on proving that you’re following the rules, AI model developers will build to specifications buyers can check. That raises the safety floor for all industry players, startups included, without handing an advantage to a few large, licensed incumbents.

    The E.U. approach: How this aligns, where it differs

    This framework aligns with the E.U. AI Act in two important ways. First, it centers risk at the point of impact: The act’s “high-risk” categories include employment, education, access to essential services, and critical infrastructure, with life-cycle obligations and complaint rights. It also recognizes special treatment for broadly capable systems (GPAI) without pretending publication control is a safety strategy. My proposal for the United States differs in three key ways:

First, the U.S. must design for constitutional durability. Courts have treated source code as protected speech, and a regime that requires permission to publish weights or train a class of models starts to resemble prior restraint. A use-based regime of rules governing what AI operators can do in sensitive settings, and under what conditions, fits more naturally within U.S. First Amendment doctrine than speaker-based licensing schemes do.

    Second, the E.U. can rely on platforms adapting to the precautionary rules it writes for its unified single market. The U.S. should accept that models will exist globally, both open and closed, and focus on where AI becomes actionable: app stores, enterprise platforms, cloud providers, enterprise identity layers, payment rails, insurers, and regulated-sector gatekeepers (hospitals, utilities, banks). Those are enforceable points where identity, logging, capability gating, and postincident accountability can be required without pretending we can “contain” software. They also span the many specialized U.S. agencies that may not be able to write higher-level rules broad enough to affect the whole AI ecosystem. Instead, the U.S. should regulate AI service choke points more explicitly than Europe does, to accommodate the different shape of its government and public administration.

    Third, the U.S. should add an explicit “dual-use hazard” tier. The E.U. AI Act is primarily a fundamental-rights and product-safety regime. The United States also has a national-security reality: Certain capabilities are dangerous because they scale harm (biosecurity, cyberoffense, mass fraud). A coherent U.S. framework should name that category and regulate it directly, rather than trying to fit it into generic “frontier model” licensing.

    China’s approach: What to reuse, what to avoid

    China has built a layered regime for public-facing AI. The “deep synthesis” rules (effective 10 January 2023) require conspicuous labeling of synthetic media and place duties on providers and platforms. The Interim Measures for Generative AI (effective 15 August 2023) add registration and governance obligations for services offered to the public. Enforcement leverages platform control and algorithm filing systems.

    The United States should not copy China’s state-directed control of AI viewpoints or information management; it is incompatible with U.S. values and would not survive U.S. constitutional scrutiny. The licensing of model publication is brittle in practice and, in the United States, likely an unconstitutional form of censorship.

But we can borrow two practical ideas from China. First, we should ensure trustworthy provenance and traceability for synthetic media through mandatory labeling and forensic provenance tools, which give legitimate creators and platforms a reliable way to prove origin and integrity. When authenticity can be checked quickly at scale, attackers lose the advantage of cheap copies and deepfakes, and defenders regain time to detect, triage, and respond. Second, we should require operators of public-facing, high-risk services to file their methods and risk controls with regulators, as we do for other safety-critical projects. Such filings should include due-process and transparency safeguards appropriate to liberal democracies, along with clear responsibility for safety measures, data protection, and incident handling, especially for systems designed to manipulate emotions or build dependency, a category that already includes gaming, role-playing, and associated applications.
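The provenance idea above can be sketched concretely: a generating service signs a small manifest binding a content hash to its declared origin, and platforms verify both the signature and the hash before trusting the label. This is a toy scheme using a shared HMAC key for brevity; real provenance systems (such as C2PA-style content credentials) use public-key signatures and certificate chains, and the key, field names, and origin string below are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Illustrative shared key; a real system would use a provider-held
# private key and public-key verification.
SIGNING_KEY = b"demo-key"


def label_media(content: bytes, origin: str) -> dict:
    """Produce a signed provenance manifest for a piece of synthetic media."""
    manifest = {
        "origin": origin,
        "synthetic": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    body = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return manifest


def verify_media(content: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and matches the content."""
    claimed = dict(manifest)
    sig = claimed.pop("signature", "")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # manifest forged or its fields altered
    # Signature is valid; now confirm the content itself is unchanged.
    return claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
```

The point of the design is the asymmetry the passage describes: verification is a single cheap check, while forging a label requires breaking the signature, so checking authenticity at scale becomes easier than faking it.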

    A pragmatic approach

    We cannot meaningfully regulate the development of AI in a world where artifacts copy in near real time and research flows fluidly across borders. But we can keep unvetted systems out of hospitals, payment systems, and critical infrastructure by regulating uses, not models; enforcing at choke points; and applying obligations that scale with risk.

    Done right, this approach harmonizes with the E.U.’s outcome-oriented framework, channels U.S. federal and state innovation into a coherent baseline, and reuses China’s useful distribution-level controls while rejecting speech-restrictive licensing. We can write rules that protect people and that still promote robust AI innovation.


