Todd Wasserman

May 7, 2018

Evolution of the Intelligence Community: Smartphones, AI and Predictive Policing


The intelligence community is wielding a new weapon against criminals: the smartphone. In New York City, all 35,000 officers carry smartphones linked to the Domain Awareness System (DAS), a $40-million surveillance and analytics platform developed with Microsoft that gives officers access to live video feeds from some 8,000 cameras around the city.

The NYPD has been improving DAS since the system was created in 2008, adding analytics, pattern recognition and predictive policing functions. In 2016, New York news channel NY1 reported that DAS helped officers from the 73rd Precinct make arrests. The officers received a DAS alert that shots had been fired at a Brooklyn address and discovered bullet casings on the roof. Using their phones to access DAS, they learned that a woman in the building had an outstanding arrest warrant. After obtaining a search warrant via their phones, they found two guns in her apartment and made three arrests.

The New York program was inspired by one in London. Police departments in Chicago and Los Angeles are using similar data-based predictive analysis tools, and the intelligence community is employing other AI-based technologies, such as facial recognition, to solve crimes. The use of these tools has sparked a predictable outcry over privacy and the potential for misuse.

Who’s Using What?

The NYPD’s network of 8,000 surveillance cameras pales in comparison to the U.K.’s 6 million. London is probably the most heavily surveilled place on Earth, with one camera for every 10 people. U.K. police are also applying AI in new ways. In the city of Durham, for instance, police are using an AI tool designed to help decide whether a detainee should be kept in custody based on their perceived likelihood of committing further crimes. London police also use a facial recognition system that matches faces in crowds against a database of 2.9 million images of criminals.
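As a rough illustration of how a custody-risk tool of this kind might work (the Durham system's actual model, features and data are not described here), the sketch below trains a simple classifier on hypothetical historical records and scores a new detainee. The feature names, training data and risk thresholds are all assumptions made for illustration.

```python
# Hypothetical sketch of a custody-risk classifier; NOT the Durham system's model.
# Feature names, data, and the risk thresholds are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier

# Assumed features per past detainee: [age, prior_arrests, prior_violent_offenses]
X_train = [
    [19, 4, 1],
    [45, 0, 0],
    [31, 7, 2],
    [52, 1, 0],
    [23, 3, 0],
    [38, 9, 3],
]
# 1 = reoffended while on release, 0 = did not (synthetic labels)
y_train = [1, 0, 1, 0, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new detainee and map the probability to a coarse risk band.
detainee = [[27, 5, 1]]
risk = model.predict_proba(detainee)[0][1]
band = "high" if risk > 0.7 else "moderate" if risk > 0.4 else "low"
print(f"Estimated reoffending risk: {risk:.2f} ({band})")
```

In practice, the controversy around such tools centers less on the modeling step shown here than on which features are fed in and how the resulting scores are acted upon.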

In the U.S., the intelligence community also makes use of AI. The FBI, for instance, uses image recognition to identify criminals by their tattoos. In addition to the New York and Chicago police departments, some 60 local police departments employ technology from PredPol, which forecasts where crimes might occur based on past data. The LAPD, meanwhile, uses predictive analysis to anticipate where crimes are likely to occur: if there is a string of robberies, for instance, it will alert patrols to keep a close eye on nearby neighborhoods where the next ones might happen.
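A minimal sketch of hotspot-style forecasting along these lines (not PredPol's proprietary algorithm) is to divide the city into grid cells, count recent incidents per cell, and flag the busiest cells for extra patrols. The grid size, incident coordinates and cutoff below are assumptions for illustration.

```python
# Minimal hotspot-forecasting sketch; not PredPol's actual algorithm.
# Grid size, incident coordinates, and the hotspot cutoff are illustrative.
from collections import Counter

CELL_SIZE = 0.01  # degrees of latitude/longitude per grid cell (assumed)

# Hypothetical past incidents as (latitude, longitude) pairs.
incidents = [
    (40.6721, -73.9101), (40.6725, -73.9110), (40.6790, -73.9052),
    (40.6722, -73.9108), (40.6901, -73.9200), (40.6724, -73.9105),
]

def cell_of(lat, lon):
    """Map a coordinate to its grid cell index."""
    return (int(lat // CELL_SIZE), int(lon // CELL_SIZE))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# Flag the busiest cells as candidate patrol areas for the next shift.
for cell, n in counts.most_common(3):
    print(f"Cell {cell}: {n} recent incidents -> suggest extra patrol")
```

Real systems weight recent incidents more heavily and model how risk spreads to neighboring cells, but the basic idea of turning past incident data into patrol suggestions is the same.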

Fears of “Minority Report”

Sci-fi authors have explored the idea of preemptive policing, most notably Philip K. Dick in “Minority Report,” which imagined the technique being turned against its own protagonist, John Anderton. Such fears were revived by a Stanford University report on artificial intelligence and life in 2030, which predicted that predictive policing would be the norm by then.

Critics are likely to continue to take aim at AI surveillance tools on privacy grounds. The Electronic Frontier Foundation, for instance, has argued that police and elected officials can use such tools to spy on critics and political opponents. The ACLU has warned that surveillance can have a chilling effect on public life, making citizens less free when their every action is being filmed.

Citizens have always had to weigh liberty and security. Different societies have arrived — and will continue to arrive — at different answers.
