This Country Will Be the First to Use AI to Predict Crimes

Argentina's president has established a security unit that, thanks to AI, will "predict future crimes." The initiative is raising significant concerns among human rights defenders.

Video surveillance has become so ingrained in our daily lives that we may no longer notice it. Once reliant on analog devices and human operators, surveillance cameras now increasingly draw on digital technologies that are more efficient but also more concerning. So-called "augmented" cameras are also being deployed around the world to monitor public spaces, a practice already widespread in countries such as the United States, China, the United Kingdom, Israel, Singapore, and India.

Now, far-right Argentine president Javier Milei—known for making a chainsaw the symbol of his electoral campaign—has turned his attention to this technology. According to The Guardian, on Thursday, August 1, he announced the creation of an Artificial Intelligence Unit Applied to Security (UIAAS). Its mission will be to use "machine learning algorithms" to analyze "historical crime data" with the goal of "predicting future crimes and contributing to their prevention." This system is causing significant concern among human rights advocates.
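The resolution does not say which algorithms the unit will actually use. As a purely illustrative sketch of what "machine learning on historical crime data" usually looks like in practice, and not a description of the UIAAS itself, the example below trains a generic risk classifier on invented incident records; every feature name and number in it is an assumption made for demonstration.

```python
# Illustrative sketch only: a generic "predictive policing"-style model,
# NOT the UIAAS system (whose design has not been made public).
# All data and feature names below are made up for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical records: one row per (district, hour-of-week slot),
# with a past-incident count and a label for whether a new incident followed.
n = 5_000
district = rng.integers(0, 50, size=n)        # assumed district identifier
hour_of_week = rng.integers(0, 168, size=n)   # hour slot within the week
past_incidents = rng.poisson(2.0, size=n)     # incidents recorded in the prior month

# Synthetic label: more past incidents -> higher chance of a new incident.
p = 1 / (1 + np.exp(-(past_incidents - 2)))
label = rng.random(n) < p

X = np.column_stack([district, hour_of_week, past_incidents])
X_train, X_test, y_train, y_test = train_test_split(X, label, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The output is a risk score per (district, time slot) -- a ranking of places
# and times, not evidence about any individual.
print("held-out accuracy:", model.score(X_test, y_test))
```

Because a model of this kind learns only from past records of police activity, its "predictions" inevitably reflect whatever biases those records contain, which is one reason rights groups treat such systems with suspicion.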

The UIAAS will be led by the Director of Cybercrime and Cyber Affairs and will include the Federal Police and Argentine Security Forces. Its mission will be "the prevention, detection, investigation, and prosecution of crime and its connections through the use of artificial intelligence."

To achieve this, the unit will "identify unusual patterns on computer networks and detect cyber threats before attacks occur"—whether from malware, phishing, or "other forms of cyberattacks." It will also "process large volumes of data from various sources to extract useful information, create suspect profiles, or identify links between different cases." Additionally, the unit will patrol public social networks, apps, websites, and even the Dark Web "to investigate crimes, identify perpetrators, and detect situations of serious security risk." Finally, it will analyze social media activities to "detect potential threats, identify the movements of criminal groups, and anticipate unrest."
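The text does not explain how "unusual patterns on computer networks" would be identified either. A common generic approach is unsupervised anomaly detection; the sketch below, which uses invented traffic features, shows the basic idea and is not based on any published detail of the unit's tooling.

```python
# Illustrative sketch only: generic anomaly detection on network-traffic
# features, the kind of technique "identifying unusual patterns on computer
# networks" usually refers to. Feature names are assumptions for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-connection features: bytes sent, connection duration (s),
# and number of distinct destination ports contacted.
normal = np.column_stack([
    rng.normal(5_000, 1_000, 2_000),   # typical byte counts
    rng.normal(30, 10, 2_000),         # typical durations
    rng.poisson(3, 2_000),             # few destination ports
])
suspicious = np.array([[500_000, 2, 400]])  # large burst, very short, scan-like

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# predict() returns 1 for points that look like the training traffic
# and -1 for points the model considers anomalous.
print("normal sample:", detector.predict(normal[:1]))      # typically [1]
print("suspicious sample:", detector.predict(suspicious))  # typically [-1]
```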

The unit will also be responsible for drone surveillance operations and will use facial recognition to enhance security measures and identify wanted persons.

However, there is a significant concern: the document outlining these initiatives does not specify the framework governing the use of AI, raising fears of potential mass surveillance. Several organizations have warned that this measure could violate human rights. "Large-scale surveillance undermines freedom of expression because it encourages people to self-censor or refrain from sharing their ideas if they suspect that anything they comment on or publish is being monitored by security forces," said Mariela Belski, executive director of Amnesty International Argentina.

The Argentine Centre for Studies on Freedom of Expression and Access to Information (CELE) has also expressed concern about "the opacity surrounding the acquisition and implementation of these technologies," noting that such technologies have previously been used to "profile academics, journalists, politicians, and activists." Another concern is the collection of personal data used to investigate people or to build profiles of so-called "potential suspects." Once again, the criteria for deciding what makes a person suspicious are not spelled out in the resolution, leaving the door open to potential totalitarian abuses. As for data protection, experts worry about who will have access to the information generated by the AI and how it will be handled.