UI researchers identify long COVID-19 with chest X-rays
The model detects long COVID-19 through X-rays by taking data points from 2D lung images constructed from 3D CT lung scans
November 27, 2022
University of Iowa researchers identified a new way to detect effects of long COVID-19, such as lung damage, through 2D chest X-ray exams in a recent study involving UI Hospitals and Clinics patients.
Until now, patients experiencing long COVID-19 required a 3D CT scan to determine compromised lung function. UI researchers discovered a way to identify lung damage in participants using 2D X-ray images.
A CT is a scan that combines a series of X-ray images taken from different angles around the body.
Long COVID-19 is the continuation of the virus’s symptoms after infection. Nearly one in five American adults who have had COVID-19 continue to experience long COVID-19, according to a press release from the Centers for Disease Control and Prevention.
The resulting study, “Contrastive learning and subtyping of the post-COVID-19 lung computed tomography images,” was published online on Oct. 11 in the journal Frontiers in Physiology.
The UI study was based on 100 CT scans from participants at the UI Hospitals and Clinics who were originally infected with COVID-19 and continued to experience symptoms.
UI researchers plan to build on these findings by applying the same approach to data from a larger number of people infected with different variants of COVID-19.
Ching-Long Lin, UI professor and chair of the department of mechanical engineering in the College of Engineering, said he hopes to test the model more fully so it can be used at hospitals and clinics that perform chest X-rays.
The scans were done with the lungs at full inspiration and expiration, when the patients inhaled and exhaled, so researchers could identify lung abnormalities and determine whether participants were trapping air.
Alejandro Comellas, a co-author of the study, said the team used the CT scans and deep learning methods to detect the same abnormalities using a scout scan.
The study’s goal was to find a new method to examine the lungs of long COVID-19 patients that is more accessible and less costly.
“A scout scan is just looking at your chest and your lungs, not as a 3D CT but closer to a 2D X-ray,” Comellas said.
The researchers used several techniques common in scientific studies, including deep learning methods, which are machine learning techniques that teach computers to do what comes naturally to humans.
Lin said the team combined contrastive learning, transfer learning, and artificial intelligence to create the model. Contrastive learning, in this study, trains a model on composite 2D images constructed from 3D CT images so it can detect compromised lung function in long COVID-19 patients.
Transfer learning applies knowledge gained on one problem to a new one, in this case conveying lung diagnostic information learned from CT scans to chest X-rays.
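As a rough illustration of the approach Lin describes, the sketch below pairs contrastive pretraining on CT-derived 2D composite images with transfer of the learned encoder to a chest X-ray classifier. It is a minimal, hypothetical example written in PyTorch; the network choice (ResNet-18), the loss, and the placeholder data are assumptions for illustration, not the study's actual code.

```python
# Illustrative sketch only: contrastive pretraining on 2D composites
# derived from CT, then transfer of the backbone to chest X-rays.
# Model, loss, and data pipeline are assumptions, not the study's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class Encoder(nn.Module):
    """ResNet backbone that maps a 2D lung image to a normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, dim)      # projection head for contrastive loss

    def forward(self, x):
        return F.normalize(self.head(self.backbone(x)), dim=1)

def contrastive_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss: two augmented views of the same composite
    image should produce similar embeddings."""
    logits = z1 @ z2.T / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# --- Stage 1: contrastive pretraining on CT-derived 2D composites ---
encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
view1 = torch.randn(8, 3, 224, 224)   # placeholder: augmented view 1 of a composite
view2 = torch.randn(8, 3, 224, 224)   # placeholder: augmented view 2 of the same image
opt.zero_grad()
loss = contrastive_loss(encoder(view1), encoder(view2))
loss.backward()
opt.step()

# --- Stage 2: transfer the pretrained backbone to chest X-rays ---
classifier = nn.Sequential(encoder.backbone,
                           nn.Linear(512, 2))   # e.g., air trapping vs. normal
xray_batch = torch.randn(8, 3, 224, 224)        # placeholder X-ray images
logits = classifier(xray_batch)
```

In this kind of two-stage setup, the contrastive stage only needs the images themselves, and the transfer stage fine-tunes the pretrained features on the cheaper, more widely available X-ray data.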
“They were able to identify the people with lung abnormalities either in inflammation or air trapping,” Comellas said. “They were able to reproduce what we’re finding with CT scans and not require them to have this technology.”
Though the CT scan is more accurate with better contrast and details, the technology required is expensive and inaccessible for some.
Lin said the model means a 2D chest X-ray can not only reach similar conclusions but is also easier to access than a CT scan.
“This is important because chest X-ray equipment is available, accessible, and less costly than CT scans,” Lin said.
Comellas and Lin said the model will have an effect on all patients who are suffering from long COVID-19, and it has implications for people globally as well as in the U.S.
“The hope is to test the model more fully so that it can be used at hospitals and medical clinics that perform chest X-rays,” said Lin. “Chest X-rays are more accessible at regular clinics, while CT is more accurate due to better sensitivity. Integration of the two models can combine advantages of both images.”