Can Artificial Intelligence Detect Crime Before It Happens?

Pre-crime sounds like something out of George Orwell’s 1984, a glimpse at a society ruled by a tyrannical government that surveils its citizens in their homes and on the street. The language of its people is altered to prevent talk of revolt, and everyone is on high alert due to a perceived clear and present danger that a neighbor may well be working with the country’s sworn enemy. Seemingly normal behavior is read as a sign of defiance, and that defiance is punishable as thoughtcrime. Pre-crime itself is one of the central ideas behind Minority Report, a movie (based on a short story by Philip K. Dick) about a world of tomorrow where crime is prevented by acting on predictions of future events.

Imagine taking a ride on the subway when suddenly you’re struck with a terrible headache, or your stomach begins to rumble. You might look around frantically for the nearest exit from the station. You might even break out in a sweat or appear nervous. These are common signs, even in people with minor social anxiety. What if I were to tell you that an artificial intelligence connected to a series of surveillance cameras could alert the authorities to your unusual behavior?

Such a system exists, and it’s being deployed in the US right now. San Francisco is preparing to install 288 of these crime-predicting cameras around its subway system, which will then be empowered to text (or call) guards and even shut down trains if anomalous behavior is detected.
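None of these vendors publish their decision logic, so take this as pure speculation on my part, but the escalation described above could boil down to something as simple as the following sketch. Every name, score, and threshold here is invented for illustration:

```python
# Hypothetical sketch of an escalating response to an anomaly score.
# Nothing here reflects the actual deployed system; the function name,
# camera IDs, and cutoff values are all made up for illustration.

def respond_to_anomaly(camera_id: str, anomaly_score: float) -> str:
    """Map a camera's anomaly score (0..1) to an escalating response."""
    if anomaly_score < 0.5:
        return "log and move on"                   # routine: record it, nothing more
    if anomaly_score < 0.8:
        return f"text guard about {camera_id}"     # nudge a human to take a look
    if anomaly_score < 0.95:
        return f"call guard about {camera_id}"     # urgent human review
    return f"halt trains near {camera_id}"         # drastic, automated intervention


print(respond_to_anomaly("platform-3", 0.62))  # -> text guard about platform-3
```

Notice that the most drastic action, stopping trains, hinges entirely on where some programmer drew a line.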

The Republican National Convention in Tampa, FL, will also be closely watched by these electronic psychics later this month.

Is this surveillance out of control? Does anyone have any privacy anymore? The reasons behind this type of heightened security range from accident avoidance to the prevention of terrorist attacks. Still, if we’re going to trust computers to determine who is and isn’t suspicious, then we’ve already lost the greater battle here.

Artificial intelligence, like anything else, can be fooled. It can miss things, despite being “trained” by its programmers. This lures human security personnel into a false sense of… security. It’s easy to overlook what’s right in front of you when the safety net of an advanced artificial brain, calculating facial expressions and tracking objects, is supposedly on the job.

I’ve worked in security before, at one of the largest computer manufacturing facilities in the US. All the processes and standards in the world amount to nothing without intuition and common sense. Unfortunately, you can’t teach common sense to a computer. You can only tell it that if eyebrows are furrowed in this way, or the body heat of a passenger rises past this point, there may (or may not) be cause for alarm.
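In software terms, that kind of “training” often reduces to hand-tuned thresholds. Here’s a deliberately naive sketch (the feature names and cutoffs are made up, not anything an actual vendor has published) showing how easily such rules confuse a headache with a threat:

```python
# Invented, illustrative thresholds: the kind of brittle rule the
# paragraph above describes, not any real product's logic.
RULES = {
    "brow_furrow": 0.7,    # 0..1 intensity from a face model
    "body_temp_c": 38.0,   # estimated skin temperature in Celsius
    "pacing_speed": 1.8,   # meters/second while moving back and forth
}

def looks_suspicious(features: dict) -> bool:
    """Flag a passenger if any single feature crosses its fixed cutoff."""
    return any(features.get(name, 0.0) > cutoff for name, cutoff in RULES.items())

# A bad headache can trip the same rule as genuine distress:
print(looks_suspicious({"brow_furrow": 0.75, "body_temp_c": 37.2}))  # True
print(looks_suspicious({"brow_furrow": 0.40, "body_temp_c": 36.8}))  # False
```

A system like this has no idea why your brow is furrowed; it only knows that a number crossed a line.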

Homeland Security estimates that the money currently being spent on artificial intelligence units such as these will quadruple in the next four years, making this a multi-billion-dollar industry. Someone is making a lot of money off of it, and until I see stories showing that these programs have actually saved lives, I’m not a big fan of them. That money could go to other public works projects: hiring people to make sure things are safe, or improving the condition of the subway systems in the first place.

If you ask me, this type of technology isn’t ready for prime time. I feel sorry for anyone who undergoes questioning or a search because a machine said they looked suspicious. Isn’t it bad enough that we’re being tracked in the first place?

CCTV Camera by Colin Russell

Article Written by

Ryan Matthew Pierson has worked as a broadcaster, writer, and producer for media outlets ranging from local radio stations to internationally syndicated programs. His experience includes every aspect of media production. He has over a decade of experience in terrestrial radio, Internet multimedia, and commercial video production.