
Citizen Is Using AI to Generate Crime Alerts With No Human Review. It’s Making a Lot of Mistakes

Three sources described how AI is writing alerts for Citizen and broadcasting them without prior human review. In one case AI mistranslated “motor vehicle accident” to “murder vehicle accident.”
Image: Citizen website.

Crime-awareness app Citizen is using AI to write alerts that go live on the platform without any prior human review, leading to factual inaccuracies, the publication of gory details about crimes, and the exposure of sensitive data such as people’s license plates and names, 404 Media has learned.

The news comes as Citizen recently laid off more than a dozen unionized employees, with some sources believing the firings are related to Citizen’s increased use of AI and the shifting of some tasks to overseas workers. It also comes as New York City enters a more formal partnership with the app.

Do you know anything else about how Citizen or others are using AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

“Speed was the name of the game,” one source told 404 Media. “The AI was capturing, packaging, and shipping out an initial notification without our initial input. It was then our job to go in and add context from subsequent clips or, in instances where privacy was compromised, go in and edit that information out,” they added, meaning after the alert had already been pushed out to Citizen’s users.
