PERSPECTIVE: AI Is Not a Witness
- Erick Grau


AI can assist. It cannot testify.
Artificial intelligence has earned a seat at the table in government, healthcare, cybersecurity, and law enforcement. But it does not belong in the witness chair.
A recent federal ruling highlighted a troubling misuse of generative AI: law enforcement officers relying on tools like ChatGPT to draft use-of-force reports. The judge’s response was blunt. AI-generated narratives introduced factual inaccuracies that contradicted body-camera footage, ultimately undermining the credibility of the reports themselves. The message was clear: efficiency does not excuse distortion.
This moment matters far beyond one courtroom.
What went wrong
Generative AI does not observe events. It does not perceive intent. It does not remember reality. It predicts language.
In the ICE case referenced by HSToday, agents fed limited prompts and images into an AI system and accepted the resulting narrative as documentation. The AI filled gaps with statistically plausible language, not verified truth. From a technical standpoint, this outcome was entirely predictable.
From a legal and ethical standpoint, it was unacceptable.
AI systems are trained to sound confident, coherent, and complete. That makes them powerful drafting tools — and dangerous narrative engines when facts are incomplete or ambiguous.
A witness recounts what they saw.
An AI reconstructs what sounds right.
Those are not the same thing.
The credibility problem
Law enforcement, courts, and regulatory bodies operate on trust. Reports are assumed to be authored by humans who can be questioned, challenged, and held accountable.
When AI generates the narrative:
Who is accountable for inaccuracies?
How do you cross-examine a language model?
How do you distinguish observation from inference?
You can’t. And juries know it.
Once AI authorship enters the evidentiary chain without strict controls, every document becomes suspect. Not because AI is malicious — but because it is indifferent to truth unless explicitly constrained and verified.
This is not an anti-AI argument
As someone who builds and deploys AI systems professionally, I’ll say this plainly: banning AI from government workflows would be shortsighted.
But misusing AI is worse.
AI excels at:
Summarization of verified facts
Pattern detection across large datasets
Drafting after human validation
Administrative efficiency
AI fails at:
First-person factual testimony
Contextual judgment without full data
Ethical interpretation
Accountability
The problem isn’t the technology. It’s the role we assign it.
Guardrails that actually work
If agencies insist on using AI in documentation workflows, several non-negotiables must be in place:
1. Human-first authorship
AI may assist with formatting or summarization, but the factual narrative must originate from a human who directly observed the event.
2. Explicit labeling
Any AI-assisted content should be clearly marked. Hidden AI authorship is a credibility time bomb.
3. Mandatory verification
AI outputs must be cross-checked against primary sources (body-camera footage, logs, timestamps) before submission; a minimal sketch of such a check follows this list.
4. Training that goes beyond “how to prompt”
Personnel need to understand how AI fails, not just how to use it. Over-trust is the real risk.
5. Policy clarity
Agencies need written policy defining where AI is allowed, where it is prohibited, and who is accountable when errors occur.
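To make guardrails 2 and 3 concrete, here is a minimal sketch in Python of what explicit labeling and mandatory verification could look like inside a report-drafting workflow. The `ReportSection` record, the two-minute tolerance, and the helper names are illustrative assumptions, not any agency's actual schema. The point is structural: AI involvement is flagged in the data itself, and claimed timestamps are checked against body-camera logs before a report can leave the pipeline.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ReportSection:
    """One section of a report, with provenance made explicit (hypothetical schema)."""
    text: str
    author: str                  # guardrail 1: a named human remains accountable
    ai_assisted: bool            # guardrail 2: AI involvement is labeled, never hidden
    claimed_events: list[datetime] = field(default_factory=list)
    verified_sources: list[str] = field(default_factory=list)  # e.g. "bodycam-0142"

def unverified_events(section: ReportSection,
                      bodycam_log: list[datetime],
                      tolerance: timedelta = timedelta(minutes=2)) -> list[datetime]:
    """Guardrail 3: every timestamp asserted in the narrative must fall within
    `tolerance` of at least one body-camera log entry; return those that don't."""
    return [claimed for claimed in section.claimed_events
            if not any(abs(claimed - logged) <= tolerance for logged in bodycam_log)]

def ready_for_submission(section: ReportSection, bodycam_log: list[datetime]) -> bool:
    """Block submission if AI-assisted content lacks cited sources or corroboration."""
    if section.ai_assisted and not section.verified_sources:
        return False             # AI helped draft it, but nothing was cross-checked
    return not unverified_events(section, bodycam_log)
```

The check is deliberately simple: it proves corroboration exists, not that the narrative is true. That ordering is the design point. A human authors first, the AI's role is labeled in the record, and the document cannot advance until its claims have been tested against primary sources.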
The bigger picture
This ruling is not just about law enforcement. It’s a warning to every industry rushing AI into high-stakes workflows without governance.
Healthcare. Finance. Security. Education.
AI can accelerate work — but it cannot replace responsibility.
When we confuse language fluency with truth, we don’t just risk errors. We erode trust in systems that depend on credibility to function at all.
Final thought
AI is a powerful assistant.
It is not a witness.
It is not an authority.
And it is never a substitute for human accountability.
The organizations that get this right won’t be the ones moving fastest. They’ll be the ones thinking most clearly about where AI belongs — and where it never should.
Primary Source: HSToday – "Perspective: AI Is Not a Witness"