When Securus Technologies, the telecom giant serving over 3,500 correctional facilities, announced it was piloting an AI model to scan inmate communications for signs of planned crimes, the headline was seductive: technology preventing violence. The truth, however, reveals a system less concerned with stopping crime and more focused on perfecting a new form of institutional surveillance. This isn't a story about predictive policing; it's about predictive control.
The Surveillance Engine, Not the Crime Stopper
According to MIT Technology Review, Securus president Kevin Elder explained that the company began building its AI tools by training models on "years" of inmates' phone and video calls. This dataset, comprising millions of hours of conversations between incarcerated people and their families, lawyers, and friends, forms the foundation of a system now being used to scan calls, texts, and emails in real time.
The immediate assumption is that this AI acts as a digital sentinel, listening for keywords like "weapon" or "escape." The reality is more nuanced and ethically fraught. The model isn't just looking for explicit threats; it's analyzing patterns of speech, emotional tone, relationship dynamics, and conversational context gleaned from its training on a captive population's most private moments. The goal isn't merely interception; it's behavioral forecasting based on a massive, involuntary human experiment.
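To make the concern concrete, here is a minimal sketch of what a call-risk scorer of this general kind might look like. Everything in it is assumed for illustration: the transcripts, the labels, and the flag threshold are hypothetical, and this is not Securus's actual pipeline, whose details are not public. The point is structural: the "risk" such a model learns is whatever its training labels encoded, which are past institutional decisions rather than ground truth about intent.

```python
# Hypothetical sketch of a call-risk scorer; NOT Securus's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training data: transcripts paired with labels that reflect
# past institutional decisions (e.g. "this call preceded an infraction"),
# not ground truth about future crimes.
transcripts = [
    "i just want to get out and see the kids again",
    "if he keeps pushing me something bad is going to happen",
    "tell my lawyer the hearing moved to thursday",
    "i'm done with this place, i can't take it anymore",
]
labels = [0, 1, 0, 1]  # the labels inherit whatever bias produced them

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

FLAG_THRESHOLD = 0.5  # an arbitrary operating point chosen by the operator

def score_call(transcript: str) -> tuple[float, bool]:
    """Return a 'risk' probability and whether the call would be flagged."""
    risk = float(model.predict_proba([transcript])[0][1])
    return risk, risk >= FLAG_THRESHOLD

# Ordinary venting can land above the threshold as easily as a real threat:
print(score_call("i swear i'm going to lose it if the appeal falls through"))
```

Nothing in that sketch distinguishes frustration from intent; it only measures how closely a new call resembles calls that were flagged before.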
Why This Matters Beyond Prison Walls
You should care because the carceral system has historically served as a testing ground for surveillance technologies that later migrate to the general public. From facial recognition to location tracking, tools perfected in environments with diminished constitutional protections often find their way into broader society. The AI developed by Securus doesn't exist in a vacuum. It represents a significant leap in passive, always-on monitoring of human communication.
The implications are profound. This technology operates on a foundational misconception: that future criminal intent can be reliably decoded from speech patterns. This ignores the reality of human communication, in which sarcasm, metaphor, venting, and hypothetical planning are all part of normal discourse. An AI trained on a dataset from a traumatized, stressed, and controlled population is likely to pathologize ordinary emotional expression, leading to false flags and unjust consequences.
How the System Actually Works: Data, Bias, and Control
The technical process is a masterclass in leveraging asymmetric power for data collection. Securus, as the sole communication provider for a vast prison network, has a monopoly on a uniquely vulnerable data stream. Inmates and their contacts have no meaningful alternative and limited ability to withhold consent. Every plea to a child, every whispered conversation with a spouse, every legal discussion becomes fodder for the training algorithm.
This creates an inherent bias problem of catastrophic proportions. The model is trained on data from a population that is disproportionately poor, Black, and Brown, a direct result of systemic biases in policing and sentencing. An AI learning from this data will inevitably encode those societal biases, potentially flagging cultural speech patterns or community-specific language as "suspicious." It risks automating and amplifying the very discrimination that fills prisons in the first place.
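A toy simulation helps show the mechanism. The groups, rates, and features below are invented; the point is only that a model trained to reproduce historically biased flagging decisions will assign higher "risk" to benign calls from the over-flagged group, even when true behavior is identical across groups.

```python
# Illustrative simulation with invented numbers: two groups behave identically,
# but group B's calls were historically flagged more often. A model trained to
# predict those historical flags learns group-correlated speech markers as
# "risk" signals and reproduces the disparity on brand-new, benign calls.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_call(group: str):
    # feature 1: a dialect/community speech marker correlated with group
    dialect_marker = 1.0 if (group == "B" and random.random() < 0.8) else 0.0
    # feature 2: genuinely threat-related content, identical rate in both groups
    threat_content = 1.0 if random.random() < 0.05 else 0.0
    # historical flag: driven partly by content, partly by biased enforcement
    flag_prob = 0.7 * threat_content + (0.15 if group == "B" else 0.02)
    label = 1 if random.random() < flag_prob else 0
    return [dialect_marker, threat_content], label

train = [make_call("A") for _ in range(5000)] + [make_call("B") for _ in range(5000)]
X, y = [x for x, _ in train], [label for _, label in train]
model = LogisticRegression().fit(X, y)

# Score brand-new calls with NO threat content at all:
benign_a = [[0.0, 0.0]] * 1000  # group A benign calls
benign_b = [[1.0, 0.0]] * 1000  # group B benign calls (speech marker present)
risk_a = model.predict_proba(benign_a)[:, 1].mean()
risk_b = model.predict_proba(benign_b)[:, 1].mean()
print(f"mean risk score on benign calls, group A: {risk_a:.2f}, group B: {risk_b:.2f}")
```

If an operator then sets the flag threshold anywhere between those two averages, only one group's harmless calls get flagged, and the model looks "accurate" against its own biased labels the entire time.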
Furthermore, the system's "success" metrics are dangerously opaque. Does it "prevent crime" by prompting a shakedown that finds a contraband item? Or does it simply increase the rate of disciplinary infractions, extending sentences and tightening control? The lack of transparent, independent auditing means the primary outcome may not be public safety, but increased institutional management and revenue for a private company.
The Chilling Effect on Rehabilitation and Justice
The most pernicious impact may be on the core goals of rehabilitation. Meaningful reintegration requires maintaining community ties, the very bonds this surveillance system now monitors and potentially penalizes. If an inmate fears that an anxious conversation about post-release struggles could be flagged as "planning," they will self-censor. This severs a lifeline to the outside world, increasing isolation and damaging mental health, factors known to heighten recidivism risk.
It also threatens attorney-client privilege, a cornerstone of justice. While Securus claims to exclude legally protected calls, the mere presence of an always-listening AI creates a chilling atmosphere. The knowledge that a powerful, opaque algorithm is processing all communications fundamentally alters the nature of conversation, turning every call into a performance for the machine.
What's Next: The Normalization of Predictive Surveillance
The pilot by Securus is not an endpoint; it's a starting gun. The logical next steps are clear and alarming:
- Expansion of Scope: The technology will likely be marketed to probation and parole systems, monitoring people who have returned to society.
- Justification for Broader Use: Success in prisons (however defined) will be used to argue for similar monitoring in schools, public housing, or protest movements under banners of "prevention."
- The Data Feedback Loop: Every flagged call becomes new training data, refining the system's biases in a closed loop with no public oversight (a toy illustration of this dynamic follows the list).
- Commercialization: The underlying AI could be licensed to other governments or private security firms, creating a lucrative market in behavioral prediction.
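On the feedback-loop point above, a toy model with invented parameters shows how the drift works: if flagged calls are folded back into training as positive examples, the system's effective flag rate can climb round after round with no change in the underlying behavior of the people being monitored.

```python
# Toy model (invented parameters) of a surveillance feedback loop: each round,
# flagged calls re-enter the training set as "risky" examples, which raises the
# learned prevalence of risk and, with it, the next round's flag rate.
def feedback_loop(flag_rate: float, rounds: int, gain: float = 0.2) -> list[float]:
    """Track the effective flag rate across retraining rounds; `gain` is the
    assumed fractional increase contributed by re-ingested flags each round."""
    history = [flag_rate]
    for _ in range(rounds):
        flag_rate = min(1.0, flag_rate * (1 + gain))
        history.append(flag_rate)
    return history

for r, rate in enumerate(feedback_loop(flag_rate=0.05, rounds=10)):
    print(f"round {r:2d}: effective flag rate = {rate:.3f}")
```

The numbers are arbitrary; the shape is the point. Without independent auditing against ground truth outside the system's own labels, the loop rewards itself for flagging more.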
The fundamental question we must ask is not whether this AI can predict crime, but what kind of society we build when we treat human communication primarily as a source of risk to be managed. The Securus system reframes the act of talking, of maintaining love, hope, and connection under brutal circumstances, as data points in a security algorithm.
The Real Takeaway: Surveillance in the Guise of Safety
The story of Securus's AI is a powerful reminder that technology is never neutral. It is shaped by the values and incentives of its creators. In this case, a for-profit company with a captive audience has built a tool that offers prisons unprecedented insight into inmate lives, all under the morally unimpeachable banner of "preventing crime."
But prevention is a myth when the system is designed for control. True crime prevention involves addressing poverty, addiction, lack of education, and trauma. It requires investment in communities, not just investment in monitoring them. This AI model offers a technological shortcut that bypasses those hard societal choices, instead opting for a panopticon where every word is a potential violation.
The call to action is clear: demand transparency, rigorous independent auditing, and strict legislative boundaries for such systems. Question the easy narrative of tech-as-savior. Understand that the most dangerous applications of AI are often those wrapped in the most virtuous promises. The reality of AI crime prediction is here, and it has less to do with justice than with the quiet, relentless expansion of watchful power.