Science and Tech

Law enforcement increasingly uses artificial intelligence but doesn't understand how it works, according to this study

While advances in artificial intelligence can help law enforcement catch criminals, the technology is not foolproof and could raise ethical issues.

More and more law enforcement agencies, from the military to the police, rely on various artificial intelligence technologies for operations on the ground and for dealing with criminals, not just hackers.

The problem is that many law enforcement officials do not understand exactly how artificial intelligence works, which could result in mistaken arrests or poorly planned operations, and that is precisely what this study says.

So say researchers at North Carolina State University, who argue that artificial intelligence is still in its adolescence, yet law enforcement is already integrating it for predictive policing, facial recognition and more "without understanding exactly how they work".

"We found that study participants were not familiar with AI or the limitations of AI technologies," states Jim Brunet, study co-author and director of the NC State Public Safety Leadership Initiative.

"This included AI technologies that participants had used at work, such as facial recognition and gunshot detection technologies," he said.

The problem that exists, according to the researchers

"Part of the problem is the general lack of knowledge among police officers about AI capabilities and how they work," said Ronald Dempsey, the study's first author and a former graduate student at NC State.

"That makes it difficult or impossible for them to appreciate the limitations and ethical risks. That may pose significant problems for both law enforcement and the public," Dempsey adds.

"If emerging AI technologies are well regulated and carefully implemented, they can be a public safety asset and potentially increase community trust in policing and the criminal justice system," the study states.

"However, study participants raised concerns about the risks of algorithmic bias (diversity and representativeness challenges), the challenge of replicating the human factor of empathy, and concerns about privacy and trust," it adds.

"It is also important to understand that AI tools are not foolproof," Dubljevic said. "AI is subject to limitations. And if law enforcement officials don't understand those limitations, they may place more value on AI than is warranted, which can pose ethical challenges in its own right."

"There are always dangers when law enforcement adopts technologies that were not developed with law enforcement in mind," adds Brunet. "This certainly applies to artificial intelligence technologies, such as facial recognition. As a result, it is critical that law enforcement officials have some training in the ethical dimensions surrounding the use of these artificial intelligence technologies."
