Companies like Flock and Axon sell suites of sensors – cameras, license plate readers, gunshot detectors, drones – and then offer AI tools to make sense of that ocean of data (at last year's conference, I watched countless AI-for-police startups courting the chiefs they sell to on the expo floor). Departments say these technologies save time, ease officer shortages, and help cut down response times.
Those sound like worthy goals, but this pace of adoption raises an obvious question: Who makes the rules here? When does the use of AI cross over from efficiency into surveillance, and what kind of transparency is owed to the public?
In some cases, AI-powered police technology is already driving a wedge between departments and the communities they serve. When police in Chula Vista, California, became the first in the country to receive special waivers from the Federal Aviation Administration to fly their drones farther than normally allowed, they said the drones would be deployed to solve crimes and get people help sooner in emergencies. They have had some success.
But the department has also been sued by a local media outlet alleging that it broke its promise to make drone footage public, and residents have said the drones buzzing overhead feel like an invasion of privacy. An investigation found that the drones were deployed more often in poor neighborhoods, and for minor issues like loud music.
Jay Stanley, a senior policy analyst at the ACLU, says there is no overarching federal law governing how local police departments adopt technologies like the tracking software I wrote about. Departments usually have the leeway to try it first and see how their communities react after the fact. (Veritone, which makes the tool I wrote about, said it could not name or connect me with departments using it, so the details of how it is being deployed by police are not yet clear.)