After crunching through thousands of chest x-rays and the clinical reports that accompany them, an AI has learned to spot diseases in those scans as accurately as a human radiologist.
The majority of current diagnostic AI models are trained on scans labeled by humans, but that labeling is a time-consuming process. The new model, called CheXzero, can instead “learn” on its own from existing medical reports that specialists have written in natural language.
The findings suggest that labeling x-rays to train AI models to interpret medical images isn’t necessary, which could save both time and money.

A team of researchers from Harvard Medical School trained CheXzero on a publicly available data set of more than 377,000 chest x-rays and more than 227,000 corresponding clinical reports. This taught it to associate certain types of images with their accompanying notes, rather than learning from structured data that had been manually labeled for the task. CheXzero’s performance was then tested on separate data sets from two different institutions, one of them in another country, to check that it could still match images with the corresponding notes even when the reports used differing terminology.

The research, described in Nature Biomedical Engineering, found that the model was more effective at identifying issues such as pneumonia, collapsed lungs, and lesions than other self-supervised AI models. In fact, its accuracy was similar to that of human radiologists.

While others have tried training AI on unstructured medical data in this way, this is the first time a model that learns from unstructured text has matched radiologists’ performance, and it has demonstrated the ability to predict multiple diseases from a given x-ray with a high degree of accuracy, says Ekin Tiu, an undergraduate student at Stanford and a visiting researcher who coauthored the study. “We are the first to do that and demonstrate that effectively in this field,” he says.

The model’s code has been made publicly available to other researchers in the hope that it could be applied to CT scans, MRIs, and echocardiograms to help detect a wider range of diseases in other parts of the body, says Pranav Rajpurkar, an assistant professor of biomedical informatics in the Blavatnik Institute at Harvard Medical School, who led the project. “Our hope is that people are able to apply this out of the box to other chest x-ray data sets and diseases that they care about,” he says.

Rajpurkar is also optimistic that diagnostic AI models requiring minimal supervision could help increase access to health care in countries and communities where specialists are scarce.

“It makes a lot of sense to use the richer training signal from reports,” says Christian Leibig, director of machine learning at German startup Vara, which uses AI to detect breast cancer. “It’s quite an achievement to get to that level of performance.”
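For readers curious about the mechanics, models of this kind typically pair each image with its report and train two encoders so that matching image-report pairs land close together in a shared embedding space; diseases can then be predicted “zero-shot” by comparing an image against short text prompts, with no manual labels involved. The sketch below illustrates that general contrastive recipe in PyTorch. It is a minimal illustration, not CheXzero’s actual implementation: the tiny encoders, placeholder tokenization, prompt handling, and hyperparameters are all assumptions made for brevity.

```python
# Minimal sketch of contrastive image-report training plus zero-shot
# prediction via text prompts. Encoders and data here are stand-ins,
# not the architecture or data pipeline from the CheXzero paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyImageEncoder(nn.Module):
    """Stand-in for a real vision backbone (e.g. a ViT or ResNet)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
    def forward(self, x):
        return self.net(x)

class TinyTextEncoder(nn.Module):
    """Stand-in for a real text backbone over tokenized report text."""
    def __init__(self, vocab_size=10000, embed_dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, embed_dim)
    def forward(self, token_ids):
        return self.emb(token_ids)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss: each x-ray should score highest
    against its own report, and vice versa, within the batch."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(len(logits))       # i-th image matches i-th report
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

img_enc, txt_enc = TinyImageEncoder(), TinyTextEncoder()
opt = torch.optim.Adam(
    list(img_enc.parameters()) + list(txt_enc.parameters()), lr=1e-4)

# One training step on a placeholder batch of x-rays and report tokens.
images = torch.randn(8, 1, 224, 224)          # fake chest x-rays
reports = torch.randint(0, 10000, (8, 64))    # fake tokenized reports
loss = contrastive_loss(img_enc(images), txt_enc(reports))
loss.backward()
opt.step()

# Zero-shot prediction: no labels, just one text prompt per condition
# (e.g. "pneumonia", "collapsed lung", "lesion"), tokenized the same way.
prompts = torch.randint(0, 10000, (3, 8))     # fake tokenized prompts
with torch.no_grad():
    img = F.normalize(img_enc(images[:1]), dim=-1)
    txt = F.normalize(txt_enc(prompts), dim=-1)
    probs = (img @ txt.t()).softmax(dim=-1)   # similarity to each prompt
print(probs)                                  # scores over the 3 conditions
```

The key design point is that the supervision signal comes entirely from the pairing of scans with the free-text reports radiologists already write, which is why no separate labeling step is needed.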