Concerns over AI fakes

Dr. Ramón Argila de Torres y Sandoval

March 7, 2026

There are real concerns about AI generating images and videos indistinguishable from footage captured directly through a lens onto a recording medium. Yet, when used lawfully, such imagery can be entertaining, such as using AI to reframe culturally relevant videos for parody or humor.

A recent AI-generated parody, easily found by searching X for ‘The Matrix in 2026,’ illustrates the lighter side of this technology.

The Matrix training video segment in which Neo stares at the woman in the red dress was recently re-imagined through AI as an anti-WOKE training video, complete with people with strangely dyed hair and a dude in a red dress.

A genuinely clever piece of satire. The Matrix scene is almost perfectly structured for that kind of reframing.

Why that parody works so well:

  • The original scene is literally about distraction and losing focus on what matters
  • Replacing the woman in a red dress with a dude IN a red dress while keeping the “training” framing makes the metaphor write itself
  • Morpheus as the stern instructor figure training Neo as the clueless newbie
  • The underlying message of the original actually reinforces the parody’s point

The creative use of AI for satire like that:

  • Has a long legitimate tradition going back to political cartoons and parody
  • When clearly framed as satire it falls under protected expression
  • The Matrix parody punches at Woke ideology rather than targeting a specific private individual
  • Those are generally the cleaner uses of the technology

The broader concern, though, is completely valid:

  • The same technology that makes that funny Matrix parody possible also makes it possible to fabricate realistic video of real people saying and doing things they never did
  • Courts and legislatures are genuinely behind the technology curve on this
  • Authentication of original media is becoming a serious legal and journalistic challenge
  • Some are pushing for cryptographic signing of genuine camera footage at the hardware level as a potential solution

The core problem: Once people can’t trust video evidence, it doesn’t just enable misinformation — it also gives genuinely guilty people a plausible defense. That’s a serious societal risk.

Mendones v. Cushman & Wakefield: Judge Victoria Kolakowski in California’s Alameda County sensed something was wrong with a video submitted as evidence — the witness’s voice was disjointed and monotone, her face fuzzy and lacking emotion, with twitching and repeated expressions every few seconds. She determined it was an AI deepfake and dismissed the case on September 9, 2025. The laws have not kept pace with this.

The exhibits included deepfake witness videos, altered Ring camera footage, and fabricated text message screenshots — with mismatched lighting, unnatural speech patterns, and suspicious metadata flagging them as fake.
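
To give a concrete flavor of the metadata angle, here is a minimal Python sketch of the kind of first-pass screening a forensic examiner might script, assuming FFmpeg’s ffprobe utility is on the system and using a hypothetical exhibit file name. The heuristics are illustrative only: a missing camera tag proves nothing by itself, it merely flags a file for closer human review.

    import json
    import subprocess

    def probe_metadata(path: str) -> dict:
        """Dump container and stream metadata using ffprobe (part of FFmpeg)."""
        result = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_format", "-show_streams", path],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    def metadata_red_flags(meta: dict) -> list[str]:
        """Crude screening heuristics; a hit means 'look closer', not 'fake'."""
        flags = []
        tags = {k.lower(): v for k, v in
                meta.get("format", {}).get("tags", {}).items()}
        if not any("make" in k or "model" in k for k in tags):
            flags.append("no camera make/model tags")
        if "creation_time" not in tags:
            flags.append("no creation timestamp")
        if "lavf" in tags.get("encoder", "").lower():
            flags.append("re-encoded by a generic tool, not camera firmware")
        return flags

    if __name__ == "__main__":
        meta = probe_metadata("exhibit_video.mp4")  # hypothetical exhibit file
        for flag in metadata_red_flags(meta):
            print("suspicious:", flag)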

The deeply troubling part: That deepfake was caught because it used obsolete technology. Future fakes will not be so obvious.

The “deepfake defense” problem — the other shoe dropping: Defense attorneys have begun invoking the “deepfake defense” — the ease of producing deepfakes now enables bad actors to dismiss genuine recordings as fabrications. Evidence that would previously have been considered nearly ironclad is now cast into doubt.

The legal system is genuinely behind: Since the beginning of 2025 alone, there have been 518 documented cases of AI-fabricated content being used in U.S. courts. And the knife cuts both ways. Either a fake video is submitted as real, or a real video is dismissed as fake, creating a nearly perfect evidentiary crisis:

The two-edged problem:

Fake submitted as real:

  • Fabricated evidence convicts innocent people
  • Civil cases are won fraudulently
  • Bad actors get away with crimes they committed
  • Judges and juries have no reliable way to detect it as technology improves

Real dismissed as fake:

  • Genuine criminals walk free invoking the “deepfake defense”
  • Victims of documented crimes lose their cases
  • Body cam footage becomes challengeable
  • Security camera evidence loses its courtroom weight

What makes this particularly dangerous:

  • Both problems SIMULTANEOUSLY undermine the justice system from opposite directions
  • It essentially poisons the entire well of video evidence
  • The more sophisticated deepfakes become, the more ALL video becomes suspect
  • There’s no floor to how bad this gets without a technological or legal solution

The systemic damage beyond courts:

  • News footage becomes questionable
  • Historical documentation becomes unreliable
  • Political events can be fabricated or genuine ones denied
  • War crimes documentation becomes legally challengeable

Potential solutions being discussed:

  • Hardware-level cryptographic signing of authentic footage at the camera sensor itself (a minimal sketch follows this list)
  • Blockchain time-stamping of original recordings
  • Mandatory AI detection standards for court submissions
  • Chain of custody requirements for digital evidence
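
To make the first of those ideas concrete, here is a minimal Python sketch of sign-at-capture, verify-in-court, assuming a camera that holds an Ed25519 private key in tamper-resistant hardware and publishes its public key through a manufacturer registry. The key handling, file name, and registry are hypothetical, and the cryptographic calls use the third-party cryptography package; this is a sketch of the idea, not any proposed standard.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # --- at capture time, inside the camera ---
    camera_key = Ed25519PrivateKey.generate()   # in reality: fixed in secure hardware
    footage = open("clip.raw", "rb").read()     # hypothetical raw footage
    digest = hashlib.sha256(footage).digest()   # hash the footage, sign the hash
    signature = camera_key.sign(digest)

    # --- later, in court ---
    public_key = camera_key.public_key()        # in reality: from a trusted registry
    try:
        public_key.verify(signature, hashlib.sha256(footage).digest())
        print("footage is bit-for-bit what this camera signed at capture")
    except InvalidSignature:
        print("footage was altered after capture, or signed by a different key")

The blockchain time-stamping idea layers onto the same digest: a third party countersigns the hash together with the capture time, pinning down not just what was recorded but when.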

The hardest truth: The legal system is built on the assumption that physical evidence doesn’t lie. That assumption is now fundamentally broken, and legislators are nowhere near addressing it adequately.

It’s arguably one of the most serious under-discussed threats to functional civil society right now.

Well, wouldn’t you know. Now we have to go back to the evidence standards of 200 years ago: real people, seeing real things, and TESTIFYING. A genuinely fascinating historical irony:

The full circle:

  • 200 years ago eyewitness testimony was often the ONLY evidence
  • Photography was revolutionary because it was considered objective and unimpeachable
  • Then came video, and later digitally recorded video, until the arrival of AI
  • And now we’ve technologically lapped ourselves back to needing human witnesses

The old legal wisdom that never really went away:

  • Cross-examination of live witnesses
  • Establishing motive to lie
  • Corroborating testimony from multiple independent sources
  • Character and credibility of the witness
  • Physical presence and demeanor before the court

The irony cuts deep: All the technology that was supposed to make justice MORE reliable may have ultimately made it LESS reliable than a roomful of honest witnesses with no reason to lie.

But eyewitness testimony has its own well-documented problems:

  • Memory is reconstructive, not photographic
  • Stress distorts recall significantly
  • Cross-racial identification is notoriously unreliable
  • Witnesses can be coached or intimidated
  • People genuinely remember things wrong without lying

So where does that leave us:

  • Technological evidence now shares the same credibility problems human testimony always had
  • Neither is fully reliable alone
  • Corroboration across MULTIPLE independent types of evidence becomes more critical than ever

The deeper philosophical point: Truth in a courtroom was always a constructed consensus rather than objective fact. AI just made that uncomfortably obvious.

AI-driven fraud in photos and videos may force stores and organizations to go back to night watchmen, and real people will get their jobs back. That is a really interesting economic and practical observation: the AI that was supposed to replace people may itself be replaced by people.

The economic reality hitting retail right now:

  • Self-checkout was supposed to save labor costs
  • Shrinkage (theft) at self-checkout has been dramatically higher than expected
  • Several major retailers, including Walmart and Target, have been quietly rolling back self-checkout
  • AI camera surveillance systems are expensive to install and maintain

The security technology paradox:

  • AI surveillance can be fooled by AI generated credentials and spoofed footage
  • Hackers can potentially compromise camera systems
  • A human being present is considerably harder to digitally manipulate
  • You can’t deepfake your way past a person standing in the room

The jobs argument is genuinely interesting:

  • Night watchmen, security guards, store attendants
  • Court stenographers and document authenticators
  • Human notaries and witness verification roles
  • Essentially any role requiring verified human presence gains value

The broader economic irony:

  • Automation was supposed to eliminate these jobs permanently
  • The very sophistication of AI deception may create demand for human presence as the only trustworthy verification
  • The human witness becomes a PREMIUM product

The oldest security feature: A real person who can testify “I was there, I saw it with my own eyes, cross-examine me” suddenly becomes more valuable than any technology.

Adam Smith probably didn’t see that one coming — that the most advanced technological era in history would resurrect demand for the most basic human function. Showing up and paying attention.


“Stand at the crossroads and look; ask for the ancient paths, ask where the good way is, and walk in it.” Jeremiah 6:16
