Science and Tech

Analyzing system vulnerabilities with artificial intelligence


A new cybersecurity project focuses on analyzing the vulnerabilities of systems that incorporate artificial intelligence and the mechanisms to make these systems more secure.

Experts from the Valencian University Institute for Research in Artificial Intelligence (VRAIN) of the Polytechnic University of Valencia (UPV) in Spain are working on this project, called SPRINT (Security and Privacy in Systems with Artificial Intelligence).

The project addresses three sources of vulnerabilities in systems with artificial intelligence.

Firstly, the security and privacy of data: both the data used by artificial intelligence models and the data handled by the capabilities these models provide, which may involve third parties.

Secondly, the vulnerabilities of the system itself that can cause it to malfunction.

And thirdly, verifying that systems using artificial intelligence are understandable to users, operators, and developers alike, so that they can give the user the appropriate context to use them correctly.

As the principal investigator of this project at VRAIN and University Professor at the UPV, José Such, explains, “artificial intelligence is advancing ever faster and having an ever greater impact on society. Many of the systems we use today incorporate some artificial intelligence model to offer advanced features that were not available before.”

He adds that this massive use of artificial intelligence in systems has many benefits in terms of functionality and convenience, allowing complex or repetitive tasks to be carried out easily and very quickly. “However, as artificial intelligence is introduced, we are introducing new attack vectors, as is almost always the case when new technologies are introduced.” Unfortunately, as Such warns, the vast majority of artificial intelligence techniques were initially developed without considering that criminals could attack them, or take undue advantage of them, once they are incorporated into systems.
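One classic attack vector of the kind Such describes is the adversarial example: a tiny, deliberate change to an input that flips a model’s decision. The sketch below is purely illustrative and is not connected to the SPRINT project’s actual systems; the logistic “model”, its weights, and the input are all invented for the example.

```python
import numpy as np

# Illustrative only: an FGSM-style adversarial perturbation against a toy
# hand-set logistic-regression classifier. All values are made up.

w = np.array([2.0, -1.0, 0.5])   # model weights (assumed, for illustration)
b = 0.1                          # model bias

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, -0.2, 0.3])   # a benign input, classified as class 1

# FGSM idea: nudge the input in the direction that most reduces the class-1
# score. For a logistic model, the gradient of the logit w.r.t. x is just w.
eps = 0.8
x_adv = x - eps * np.sign(w)     # small, targeted perturbation

print(predict(x))      # high confidence on the clean input (~0.81)
print(predict(x_adv))  # confidence collapses on the perturbed input (~0.23)
```

The point of the sketch is that nothing in the model itself signals the attack: the perturbed input looks ordinary, which is why systems that bolt on AI without a security analysis inherit this new class of vulnerability.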

Members of the research team. (Photo: Polytechnic University of Valencia)

The results of the SPRINT project will show which vulnerabilities are introduced when artificial intelligence is used in systems, and which mechanisms can be put in place to make these systems as secure as possible.

The project is led by the Human-Centred & AI Security, Ethics and Privacy (HASP) Lab of the VRAIN of the UPV. SPRINT started in November 2023 and will run until December 2025.

The project’s conclusions will be presented at the USENIX Security Symposium, one of the most important international cybersecurity conferences, from August 14 to 16, 2024 in Philadelphia, United States. (Source: Polytechnic University of Valencia)
