
    Why government-mandated AI surveillance threatens liberty
    March 12, 2026

    The breakdown between the Department of War and Anthropic over the use of AI for domestic mass surveillance captured the public’s attention for a simple reason: people fear that this incredible technology could usher in a world in which every move is tracked, every AI prompt is logged, and every activity is monitored. They have good cause to be concerned. AI can accelerate and improve each stage of surveillance: aggregating data from more sources, analyzing that data with greater speed, and producing more accurate and comprehensive reviews of a person’s life.

    It’s no surprise, then, that people from both sides of the political aisle got involved in what was effectively a contract dispute between the DOW and this leading AI lab. The American people remain fiercely protective of their fundamental liberties: to freely associate with one another, to express their ideas without fear of punishment, and to seek out information that may seem odd or even dangerous to others. That’s why it’s particularly concerning that many of those who spoke out against AI for surveillance have remained quiet about, or even supported, surveillance of AI.

    Surveillance of AI involves private entities being compelled by the government to closely observe their users. In some instances, states have even mandated that AI labs notify third parties of user behavior. California, for example, requires AI companies to refer certain users to crisis service providers if their prompts and questions relate to suicidal ideation, suicide, or self-harm. Other states are weighing similar legislation. Some proposals would go so far as to inform parents of troubling behavior exhibited by teenage users. There is no way for companies to comply with such laws without creating mechanisms to surveil users more closely.

    The same howls of outrage generated by the possibility of the government using AI to surveil should be raised in opposition to such laws, because the same values and principles are at stake. In both instances, a strong case can be made for increased monitoring. Surveillance by the government is an effective means of stopping crime. Surveillance of AI at the government’s direction is a sure way to clamp down on anti-social and self-destructive behavior. There will always be some justification for why the benefits of keeping close tabs on others serve the common good. Yet our founders decided that the general welfare cannot be invoked as a defense for infringing on certain core liberties except in extraordinary circumstances.

    Our Constitution is a bulwark against invasions of privacy. A series of barriers seeks to maintain a space of individual freedom for every American to test ideas, to experiment with novel behaviors, and to engage in self-discovery. Demanding that AI labs pick up a hammer and chip away at those protections is an affront to those core constitutional values, just as it would be to allow the government to peer through those walls with AI.

    The concern animating surveillance-of-AI legislation is not cynical or misguided. Parents have buried children. Families have lost people who, in their final weeks, were apparently asking questions of AI systems that no human ever knew about. The grief driving these proposals is real, and the intuition behind them — that someone should have known, that something could have been done — is a deeply human response to tragedy. Families and the public are right to want tools that can identify a child in crisis before that crisis becomes irreversible. They are right to believe that technology capable of detecting distress signals should, in some form, be deployed in service of life.

    The question is not whether such tools should exist. They should, and in many cases they already do. Parents can choose AI platforms built with parental oversight features. They can select services that offer transparency into usage, flag concerning patterns, or allow for open conversations between parent and child about what information they’re seeking and why. The market for AI tools that align with a family’s particular comfort level with monitoring and reporting is expanding rapidly. A parent who wants a watchful system can find one. A teenager who needs a private space to work through difficult questions can find that too. That layered, voluntary ecosystem is not a gap in protection — it is protection, of a kind that preserves both safety and dignity.

    What the California model and its imitators would replace that ecosystem with is something categorically different: a government-scripted surveillance architecture in which the state decides what thoughts warrant intervention, what behavior triggers notification, and who gets told. The moment that determination shifts from a family’s informed choice to a legislature’s blunt mandate, the constitutional concern crystallizes. It is no longer a parent and child negotiating the terms of trust. It is the government deputizing private companies to monitor expression and report it upward through channels the user never agreed to and cannot opt out of.

    The liberties our Constitution protects were never meant to be convenient. They were designed to hold even when — especially when — the case for overriding them feels urgent and compassionate. We can build a world in which families are empowered, in which AI companies compete on how thoughtfully they handle vulnerable users, and in which the government’s role is to inform rather than to mandate and monitor. What we should resist, with the same energy we brought to the Department of War controversy, is allowing the emotional weight of tragedy to become the lever by which the state pries open a space that belongs to individuals and families alone.

    Kevin Frazier helps lead the AI Innovation and Law program at the University of Texas School of Law while also serving as a Senior Fellow at the Abundance Institute.

    Orange County Register
