Hacking Medical Images:
Fighting AI with AI
Recently, a study led by Ben-Gurion University (Israel) was published in which AI was used to attack medical centers. The authors claimed to have developed a deep learning algorithm able to add or remove lung cancer in CT scans. The software was tested by attempting to mislead the diagnoses of three radiologists with 2, 5 and 7 years of experience, achieving an average success rate of 99.2% for cancer addition and 95.8% for cancer removal. These are alarming results: the study showed that such attacks could be very harmful to medical institutions and their patients if carried out with malicious intent. Designing countermeasures that guarantee the integrity of medical data and protect health centers from this potential threat is therefore a must for medical informatics departments. Below, we unveil what’s behind these AI cyberattacks and how they can be prevented and disarmed.
The cyberattack proposed in the study was based on the use of Generative Adversarial Networks (GANs), a state-of-the-art deep learning technique introduced in 2014 by Ian Goodfellow, one of the most renowned AI researchers in the world. The main idea behind this method lies in training two neural networks, called the generator and the discriminator, whose purpose is to defeat each other. On the one hand, the objective of the generator network is to create fake data that is as realistic as possible. On the other hand, the goal of the discriminator network is to distinguish real data from the fake data created by the generator. Both networks improve iteratively: at each stage of training, the generator becomes better at falsifying data and the discriminator becomes better at detecting it, each forcing the other to improve.
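This adversarial training loop can be illustrated with a toy sketch (not the study's actual model): a one-dimensional generator and a logistic-regression discriminator, trained in alternation with NumPy. The "real data" here is just a synthetic Gaussian standing in for authentic images, and all parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "real images": 1-D samples drawn from N(4, 1.25).
def real_batch(n):
    return rng.normal(4.0, 1.25, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + c maps noise to fake samples (a tiny linear "network").
a, c = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + b) scores how "real" a sample looks.
w, b = 0.1, 0.0

lr, batch = 0.01, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = real_batch(batch)
    z = rng.normal(size=batch)
    x_fake = a * z + c
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    # Gradients of the binary cross-entropy loss w.r.t. w and b
    gw = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    gb = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * gw
    b -= lr * gb

    # --- Generator update: push D(fake) toward 1, i.e. fool the discriminator ---
    z = rng.normal(size=batch)
    x_fake = a * z + c
    d_fake = sigmoid(w * x_fake + b)
    # dL/dx_fake for L = -log D(x_fake), then chain rule through g
    gx = -(1 - d_fake) * w
    a -= lr * np.mean(gx * z)
    c -= lr * np.mean(gx)

# After training, the generator's output distribution should have drifted
# toward the real data distribution (mean near 4.0).
fake_mean = np.mean(a * rng.normal(size=10000) + c)
print(fake_mean)
```

The same alternating structure, scaled up to convolutional networks operating on image volumes, is what allows a GAN to synthesize tumor-like patterns realistic enough to fool human readers.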
For the particular application of adding and removing lung cancer from CT studies, two GAN models were trained: one for the injection of lung cancer and another for its removal. Each therefore had its own specialized generator network, one for injecting fake tumors and one for erasing real ones.
The first line of defense of medical centers should be handled by cybersecurity experts, who design communication networks robust to any kind of intrusion. But at QUIBIM we believe that, even though cybersecurity is essential, there should be one more layer of protection against this kind of malware. We propose to defeat AI with AI if the hospital network is compromised. As explained above, GAN approaches consist of two networks. In the attack proposed in the study, the researchers developed a generator model able to misguide clinical decisions. Our defense approach is based on the opposite: the design and implementation of an excellent discriminator network able to surpass the generator model and thereby detect artificially generated images. This network’s purpose would be to learn to look beyond what the human eye can detect, making it a strong barrier against generative attacks.
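The shape of such a defense can be sketched with a deliberately simplified example. Here a logistic-regression "discriminator" is trained on labeled feature vectors from authentic and tampered patches (synthetic 2-D stand-ins; the feature extraction, data, and threshold are all hypothetical assumptions, not QUIBIM's actual pipeline), and then used to flag suspicious inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: feature vectors extracted from image patches.
# Authentic and GAN-tampered patches are assumed to differ slightly in their
# feature statistics (here modeled as two overlapping 2-D Gaussian clusters).
authentic = rng.normal([0.0, 0.0], 1.0, size=(500, 2))
tampered = rng.normal([1.5, 1.5], 1.0, size=(500, 2))

X = np.vstack([authentic, tampered])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = tampered

# Logistic-regression detector trained by gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Training accuracy of the detector on the labeled patches.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = np.mean((p > 0.5) == y)

def flag_tampered(patch_features, threshold=0.5):
    """Return True when the detector scores the patch as likely tampered."""
    score = 1.0 / (1.0 + np.exp(-(patch_features @ w + b)))
    return bool(score > threshold)

print(flag_tampered(np.array([1.6, 1.4])))  # point near the tampered cluster
```

A production-grade defense would replace the linear model with a deep discriminator trained on real scans and GAN-generated forgeries, but the workflow is the same: learn the statistical fingerprint of synthetic content, then screen incoming studies against it.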
Some final words
In conclusion, the Ben-Gurion University publication helped to highlight that even though AI has unlocked many new opportunities and enabled major breakthroughs in the medical sector, it has also favored the development of potentially harmful applications. Hence, medical centers need to build a strong AI strategy, not only to improve patient care but also to be prepared for and protected against AI malware. Because, like all revolutionary technologies, AI can be used for the best, but for the worst as well.
At QUIBIM, our mission is to improve human health by applying advanced and innovative image processing techniques to radiological images, and we guarantee that we will use our extensive expertise and advanced algorithms to stay prepared against any malicious software that aims to mislead the results of our analysis pipelines or our customers’ clinical outcomes.