
Artificial Intelligence: Researchers discover biases in the diagnosis of COVID-19 through X-rays


The research by two Chilean scientists was published on Nature.com. Although the X-ray diagnoses initially achieved 96.19% accuracy, the models did not consider patient characteristics such as gender, age, patient distribution, and demographic traits, which in a larger data structure would lead to errors in the results.


Andrea Riquelme, Journalist.- According to the study “Biases associated with the database structure for the detection of COVID-19 through X-rays”, carried out by Reinel Tabares, data scientist and postdoctoral researcher, together with Gonzalo Ruz, professor at the Faculty of Engineering and Sciences of the Adolfo Ibáñez University (UAI), the machine learning models used to diagnose COVID-19 through chest X-rays, one of the methods used for early detection, did not consider patient characteristics, due to biases in the databases used to train them.

The study found highly heterogeneous and unstructured data: incomplete patient information, different conditions and technologies present during X-ray acquisition, class imbalance, and careless mixing of multiple data sets.
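As a rough illustration of the kind of database audit the study describes, and not the authors' actual code, the following Python sketch checks a hypothetical metadata table for missing patient information, class imbalance, and uneven mixing of source datasets; the file and column names are assumptions.

```python
# Minimal sketch (not the study's code) of auditing a chest X-ray metadata
# table for the problems described above. The file and column names
# ("age", "sex", "label", "source", ...) are hypothetical.
import pandas as pd

meta = pd.read_csv("covid_xray_metadata.csv")  # hypothetical metadata file

# 1. Incomplete patient information: fraction of missing values per field.
print(meta[["age", "sex", "view", "modality", "source"]].isna().mean())

# 2. Class imbalance: share of images per diagnostic label.
print(meta["label"].value_counts(normalize=True))

# 3. Careless dataset mixing: does each source contribute both classes,
#    or is one class drawn almost entirely from a single source?
print(pd.crosstab(meta["source"], meta["label"], normalize="index"))
```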

“In this case, given the contingency of the pandemic, there was a great rush to grab databases without much care and start training models. For example, these databases combined X-rays captured with one machine and images captured with another, which generated images of different quality,” Ruz highlighted.
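The quality gap Ruz describes can be spotted with a quick per-source comparison of basic image properties. The sketch below is purely illustrative and assumes a hypothetical folder layout with one subdirectory of radiographs per source dataset.

```python
# Illustrative per-source acquisition check (hypothetical folder layout:
# one subdirectory of PNG radiographs per source dataset).
from pathlib import Path

import numpy as np
from PIL import Image

root = Path("xray_datasets")  # hypothetical root directory
for source_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    sizes, intensities = [], []
    for img_path in list(source_dir.glob("*.png"))[:200]:  # sample per source
        with Image.open(img_path) as img:
            arr = np.asarray(img.convert("L"), dtype=np.float32)
        sizes.append(arr.shape)
        intensities.append(arr.mean())
    if not intensities:
        continue
    print(f"{source_dir.name}: {len(set(sizes))} distinct resolutions, "
          f"mean intensity {np.mean(intensities):.1f}")
```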


“We integrated this research, which initially sought to prove the effectiveness of the diagnosis, into the work we are doing at the UAI for the Ethical, Responsible and Transparent Algorithms project, and we also began to analyze the databases from a perspective of equity and responsibility, among other principles,” explained Tabares.

The research analyzed 19 COVID-19 detection databases based on chest X-rays. Although one of the most popular databases yielded 96.19% accuracy when its information was verified, the diagnoses did not consider the characteristics of the patients in the radiographs, such as gender, age, patient distribution, and demographic traits, which in a larger data structure would lead to errors in the results.
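A minimal sketch of why a single accuracy figure can mislead: recomputing the same metric per patient subgroup can reveal gaps that an overall 96.19% hides. The file and column names below are hypothetical, not the study's data.

```python
# Hedged sketch: a single overall accuracy can hide subgroup gaps. Assumes a
# hypothetical test-set file with true labels, model predictions, and patient
# metadata; all column names are assumptions, not the study's.
import pandas as pd
from sklearn.metrics import accuracy_score

df = pd.read_csv("test_predictions.csv")  # hypothetical file

print("overall accuracy:", accuracy_score(df["label"], df["pred"]))

# Recompute the same metric per patient characteristic highlighted in the study.
for attr in ["sex", "age_group", "source"]:
    per_group = df.groupby(attr).apply(
        lambda g: accuracy_score(g["label"], g["pred"])
    )
    print(f"\naccuracy by {attr}:\n{per_group}")
```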

“You can have a (data) model of only older patients and then use it to diagnose children, but that requires that your base already be trained accordingly,” Ruz said.

To verify this information, the researchers used Aequitas, an ethical bias auditing tool for machine learning that allows informed decisions to be made when developing and deploying predictive risk assessment tools and, with it, identified ways to mitigate those biases.
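For readers unfamiliar with the tool, the sketch below follows Aequitas's documented crosstab-and-disparity workflow on hypothetical model outputs; the input table, column names, and reference groups are illustrative assumptions, not the study's actual audit.

```python
# Sketch of an Aequitas-style audit on hypothetical model outputs. Aequitas
# expects a table with `score`, `label_value`, and categorical attribute
# columns; the data, column names, and reference groups here are illustrative.
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias

preds = pd.read_csv("test_predictions.csv")  # hypothetical file
audit_df = pd.DataFrame({
    "score": preds["pred"].astype(int),         # model's binary prediction
    "label_value": preds["label"].astype(int),  # ground-truth diagnosis
    "sex": preds["sex"].astype(str),
    "age_group": preds["age_group"].astype(str),
})

# Per-group confusion-matrix metrics (FPR, FNR, etc.) for each attribute value.
xtab, _ = Group().get_crosstabs(audit_df)

# Disparities of each group relative to a chosen reference group.
bias_df = Bias().get_disparity_predefined_groups(
    xtab,
    original_df=audit_df,
    ref_groups_dict={"sex": "male", "age_group": "adult"},
    alpha=0.05,
)
print(bias_df[["attribute_name", "attribute_value",
               "fnr_disparity", "fpr_disparity"]])
```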

“Today, even in calls for technology development there is awareness of this, and the terms of the contests or tenders are incorporating requirements so that the data used for training meet certain ethical and bias management standards,” Ruz said.

Although the study analyzed COVID-19 diagnoses made with machine learning, its results and lessons can be extrapolated to any research area.

“It is essential to consider this type of in-depth analysis of the data and its metadata, and to perform a visual examination of the information when the data used are images, with the aim of detecting possible biases early and trying to mitigate them, in order to obtain results with ethical and responsible standards, especially if we are addressing a health issue as delicate as COVID-19,” Tabares concluded.

The Ethical, Responsible and Transparent Algorithms project is an unprecedented initiative in Chile, which is being carried out by the UAI with financing from the IDB Lab (the IDB Group’s innovation laboratory) and in alliance with partners from the public and private worlds. Its objective is to install capacities and standards to incorporate ethical considerations in the purchase and use of artificial intelligence and automated decision algorithms in state agencies, and in the formulation and development of these solutions by technology providers.
