AI mistook Doritos bag for a gun, teen held at gunpoint


Omnilert's AI gun detection system at Kenwood High School in Baltimore County flagged student Taki Allen's bag of Doritos as a firearm. Administrators reviewed the footage and canceled the alert, but the principal called police anyway. Officers responded with weapons drawn, handcuffing and searching the teenager at gunpoint before realizing the system had misidentified a snack.

Incident Details

Severity: Facepalm
Company: Baltimore County Public Schools
Perpetrator: Vendor
Incident Date: October 2025
Blast Radius: Student detained at gunpoint; district reviewing contract and safety policies; community trust hit.

The Incident

Taki Allen was a student at Kenwood High School in Baltimore County, Maryland. In October 2025, the school's AI-powered gun detection system, provided by a vendor called Omnilert, analyzed CCTV camera feeds and flagged Allen as potentially carrying a firearm. What Allen was carrying was a bag of Doritos.

The system issued an alert to school administrators. According to reporting by Word in Black and other outlets, administrators reviewed the camera footage and canceled the alert - they looked at what the AI had flagged and determined it was a false positive. The sequence should have ended there: AI flags something, human checks it, human overrides the incorrect alert.

It didn't end there. The school's principal reportedly re-escalated the situation by contacting police directly, despite the alert having been canceled by the administrators who reviewed the footage. Police officers responded to the school with weapons drawn. Taki Allen, a Black teenager holding a bag of chips, was forced to the ground, handcuffed, and searched at gunpoint.

The Chain of Failures

Three distinct systems failed in sequence. The first was the AI itself. Omnilert's computer vision system is designed to detect the visual signature of firearms in security camera footage. It identified a bag of Doritos as a potential weapon. False positives in AI-based object detection are not rare - they are a known and expected failure mode, which is precisely why these systems are designed with human-in-the-loop verification steps.
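In pseudocode terms, the intended design looks something like the sketch below - a minimal, hypothetical illustration, not Omnilert's actual code. The model's output is treated as a request for human review, and human confirmation is the only path to enforcement.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str         # what the model thinks it saw, e.g. "handgun"
    confidence: float  # model score in [0, 1]; a high score is not ground truth

def handle_detection(detection: Detection, human_review, dispatch) -> None:
    """Route every model detection through a human before any enforcement.

    `human_review` shows the flagged footage to a trained person and
    returns True only if they confirm a real weapon. `dispatch` is the
    enforcement action; it is unreachable without that confirmation.
    """
    if human_review(detection):
        dispatch(detection.camera_id)
    # If the reviewer cancels, the alert is supposed to end here, by design.
```

The design choice worth noticing is that dispatch is unreachable except through the reviewer. Everything that went wrong in this case happened outside a path like this one.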

The second failure was human. The administrators who reviewed the footage did the right thing: they looked at what the AI flagged, recognized it was wrong, and canceled the alert. But the principal apparently bypassed this decision and called law enforcement anyway. The human override that was supposed to be the safety net against false positives was itself overridden by another human who either didn't trust the cancellation or made an independent judgment that the situation warranted police involvement regardless of what the camera footage showed.

The third failure was the police response. Officers arrived and treated the situation as an active weapons threat. A student was held at gunpoint and handcuffed based on an alert that had already been canceled by the people who reviewed the source footage.

The AI Detection Market

AI gun detection systems have become a growth industry in the United States, driven by the persistent crisis of school shootings. Companies like Omnilert, ZeroEyes, and Evolv Technology sell systems that promise to detect weapons using computer vision applied to existing security camera infrastructure. The pitch to school districts is compelling: automated detection that works faster than human security guards can monitor multiple camera feeds, providing early warning before a threat develops.

The challenge is accuracy. Computer vision systems trained to detect firearms have to identify specific visual patterns - shapes, proportions, how an object is held - from camera angles that weren't optimized for weapon detection, through varying lighting conditions, with partial occlusion by clothing and bodies, at distances where a bag of chips and a compact firearm can produce similar pixel patterns.
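A toy example makes the trade-off concrete. The scores and scenes below are invented, not drawn from any real detector; what matters is that a single threshold has to separate all of them:

```python
# Hypothetical detector scores - none of these numbers come from Omnilert.
frames = [
    ("compact pistol, partly hidden by a jacket", 0.82),
    ("foil snack bag, low light, held at the side", 0.76),
    ("smartphone gripped sideways", 0.41),
]

THRESHOLD = 0.70  # set low enough that real weapons are rarely missed

for description, score in frames:
    alert = score >= THRESHOLD
    print(f"{description:45s} score={score:.2f} alert={alert}")

# A threshold low enough to catch the occluded pistol also flags the
# snack bag; raising it to clear the bag risks missing the pistol.
```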

The vendors understand this. Their system designs include verification layers. When Omnilert detects a potential weapon, the alert goes to trained operators or school administrators who are supposed to view the flagged footage and make a determination before any enforcement action is taken. The detection is the first step in a process, not a conclusion.
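One way to picture the intended workflow is as a small state machine - again a hypothetical sketch with invented state names, not Omnilert's implementation. The important property is that a canceled alert is terminal: the designed process has no path from cancellation to enforcement.

```python
from enum import Enum, auto

class AlertState(Enum):
    RAISED = auto()        # model flagged a possible weapon
    UNDER_REVIEW = auto()  # operator or administrator viewing footage
    CONFIRMED = auto()     # human agrees -> enforcement protocol begins
    CANCELED = auto()      # human identified a false positive -> terminal

# Allowed transitions in the designed workflow. Note there is no edge
# out of CANCELED: a cleared alert is supposed to stay cleared.
TRANSITIONS = {
    AlertState.RAISED: {AlertState.UNDER_REVIEW},
    AlertState.UNDER_REVIEW: {AlertState.CONFIRMED, AlertState.CANCELED},
    AlertState.CONFIRMED: set(),  # terminal: handed off to enforcement
    AlertState.CANCELED: set(),   # terminal: no enforcement path exists
}

def advance(state: AlertState, new_state: AlertState) -> AlertState:
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state.name} -> {new_state.name}")
    return new_state
```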

This design assumes the human verification step actually works as intended. In Baltimore County, it did work - the administrators correctly identified the false positive and canceled the alert. The system failed because a decision-maker outside the verification chain overrode the result.

After the Handcuffs

Three days after the incident, the principal reportedly called Taki Allen to "check in." Baltimore County Public Schools said it would "look into" what happened. The district had recently dismantled the internal department that might have provided oversight for incidents involving student safety and disciplinary procedures.

The response drew criticism from community organizations. Associated Black Charities (ABC), led by Chrissy M. Thornton, investigated the incident and reported uncovering additional information about the Kenwood High School principal that the district characterized as a "personnel matter" - effectively declining to discuss it publicly.

The ACLU covered the incident as part of its ongoing work on AI surveillance and civil liberties, framing it within a broader pattern of AI-powered systems in schools that subject students to surveillance without adequate safeguards against false positives and disproportionate enforcement.

The Racial Dimension

Word in Black's coverage of the incident drew a direct comparison to Trayvon Martin, the 17-year-old shot and killed in 2012 while carrying Skittles and an iced tea. The parallel was pointed: a Black teenager carrying a snack, treated as a lethal threat by people who made assumptions about what they were seeing.

AI gun detection systems process visual information without racial bias in the narrow technical sense - the algorithm is looking for weapon-shaped objects, not making decisions based on the race of the person holding them. But the system's outputs feed into a human decision-making chain that is not bias-free. The principal who re-escalated the canceled alert made a judgment call. The police who responded with weapons drawn made tactical decisions. At each step where a human exercised discretion, the cumulative effect of the system was a Black teenager on the ground in handcuffs because an algorithm thought his chips were a gun.

The incident also raised questions about where AI surveillance systems are deployed. Schools serving communities with higher proportions of minority students are more likely to adopt AI security systems. The same systems in suburban schools with different demographic profiles might produce identical false positives - but whether those false positives end with students held at gunpoint depends on decisions made by the humans downstream of the algorithm.

The False Positive Problem

Every AI detection system has a false positive rate. The question for deployment is what happens when a false positive occurs. In medical screening, a false positive means additional testing. In spam filtering, a false positive means a legitimate email goes to the junk folder. In school gun detection, a false positive can mean a child is held at gunpoint by police.

The consequences of false positives scale with the severity of the response protocol. A detection system that sends a quiet notification to a security officer who checks a camera feed has a low-stakes false positive. The same system connected to a protocol that dispatches armed police has a life-threatening false positive. The technology is identical. The stakes are determined by what humans decide to do with the output.
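A back-of-envelope calculation makes the point concrete. The numbers below are invented for illustration - no vendor publishes rates in this form - but the arithmetic holds for any plausible values:

```python
# Illustrative, invented numbers - not measured rates for any vendor.
cameras = 200               # cameras across a school district
hours_per_day = 10          # monitored school-day hours
fp_per_camera_hour = 0.001  # one false alarm per 1,000 camera-hours

expected_daily_false_alerts = cameras * hours_per_day * fp_per_camera_hour
print(expected_daily_false_alerts)  # 2.0 - two false alerts every day

# The same two daily false alerts carry very different stakes depending
# on the protocol wired to an alert:
protocols = {
    "quiet notification to a security officer": "low-stakes",
    "automatic lockdown announcement": "disruptive",
    "armed police dispatch": "potentially life-threatening",
}
for protocol, stakes in protocols.items():
    print(f"{protocol}: a false positive here is {stakes}")
```

At that rate, the question is not whether the system will eventually flag a snack bag, but what is wired to happen when it does.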

Omnilert's system is designed with verification steps that are supposed to prevent false positives from reaching the armed-response stage. In this case, the verification worked correctly - the administrators canceled the alert. But the system's safety design couldn't account for the principal deciding to call the police after the alert had already been cleared.

The Accountability Gap

Baltimore County Public Schools found itself in an awkward position. The district had purchased an AI gun detection system to protect students. That system's false positive, combined with human decisions that bypassed the system's intended safeguards, resulted in a student being traumatized at gunpoint.

The district's response - saying it would "look into" the incident and treating the principal's actions as a personnel matter - left the fundamental questions unanswered. Should the AI vendor be held accountable for the false positive? Should the principal be held accountable for re-escalating a canceled alert? Should the district be held accountable for deploying a system without protocols that constrain what happens after an alert is canceled?

The AI gun detection industry's answer is that their technology is a tool, and tools are only as good as the humans using them. The administrators who canceled the alert used the tool correctly. The principal who called the police after the cancellation did not follow the intended workflow. From the vendor's perspective, the system worked as designed - a human just ignored it.

For Taki Allen, the distinction between a technology failure and a human failure was irrelevant. He was on the ground in handcuffs because a school decided to let an AI watch him through security cameras, and when that AI made a mistake, the adults in charge didn't protect him.
