Artificial intelligence systems can now generate and interpret content at scale. However, using these systems to automatically detect content that is harmful to users or society remains an unresolved challenge. Tackling it is the goal of several researchers from the Intelligent Systems Group (GSI) of the Polytechnic University of Madrid (UPM) in Spain.
As part of the Participation research project, a collaboration among institutions from several European countries, including Roma Tre University (Italy), Middlesex University (United Kingdom) and the Polish police, several artificial intelligence models have been developed to automatically detect radical content that may be harmful to society. Such content is typically propaganda aimed at Western countries that seeks to radicalize susceptible citizens and, ultimately, push them to commit violent crimes. Detecting and stemming the flow of propaganda into Europe is one of the priorities of the European Commission, which dedicates substantial economic and technological resources to it. The Participation project, funded with almost 3 million euros, works directly on the detection of propaganda content, analyzing the language and narratives used, and on a fully automatic detection and monitoring system.
The research within this project, conducted by Patricia Alonso del Real and Oscar Araque, both from the UPM, models how emotions and moral values are used to awaken feelings of radicalization in readers. The artificial intelligence models can thus understand the emotions and moral values expressed in a text and, from that information, detect whether the content aims to radicalize readers. This is achieved with Natural Language Processing and Machine Learning techniques, two subfields of Artificial Intelligence.
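To make the idea concrete, here is a minimal, purely illustrative sketch of how emotion and moral-value signals can be turned into features for a classifier. The lexicons below are toy examples invented for this sketch (they are not the lexicons or the model the researchers used); in a real system, curated resources and a trained machine learning classifier would take their place.

```python
# Toy lexicons: category -> set of trigger words (invented for illustration,
# not the actual resources used in the study).
EMOTION_LEXICON = {
    "fear": {"threat", "attack", "danger"},
    "anger": {"enemy", "betrayal", "revenge"},
}
MORAL_LEXICON = {
    "loyalty": {"brotherhood", "traitor", "unity"},
    "purity": {"corrupt", "pure", "defile"},
}

def lexicon_features(text, lexicons):
    """Return normalized per-category hit rates for each lexicon.

    Each feature counts how many tokens in the text appear in a
    category's word set, divided by the text length, giving a small
    numeric vector a downstream classifier could consume.
    """
    tokens = text.lower().split()
    feats = {}
    for name, categories in lexicons.items():
        for category, words in categories.items():
            hits = sum(1 for t in tokens if t in words)
            feats[f"{name}:{category}"] = hits / max(len(tokens), 1)
    return feats

features = lexicon_features(
    "the enemy is a threat to our brotherhood",
    {"emotion": EMOTION_LEXICON, "moral": MORAL_LEXICON},
)
```

In this hypothetical example, the sentence activates the "fear", "anger" and "loyalty" categories, and a classifier trained on labeled examples would learn how such activation patterns relate to radicalizing intent.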
It is difficult to locate and remove radicalizing content on the internet. The new system's mission is to automatically detect radical content on social networks and other media. (Illustration: Amazings/NCYT)
The study, titled “Contextualization of a Radical Language Detection System Through Moral Values and Emotions,” has been published in the academic journal IEEE Access.
“The use of this type of system is proving crucial for controlling harmful content, which is increasingly common on social networks and other forums,” highlights Óscar Araque, a professor at the Higher Technical School of Telecommunications Engineers (ETSIT) of the UPM. “This is a pressing need, one the European Union considers a priority for the future digital society.”
The Intelligent Systems Group, as part of its intense research activity, works on numerous projects developing artificial-intelligence-based systems in collaboration with national and international partners. One example is the funding received by the AMOR project, whose objective is to develop systems that help citizens consume content in a responsible and informed manner. To do this, intelligent robots and a hologram system will be used, providing access to information in a new and modern way. (Source: UPM)