
Why Doctors Should Be Cautious While Using Deep Learning Applications In Clinical Settings


Machine learning is applied across business areas, from oil extraction to building self-monitoring optical systems. As ML applications have matured, a parallel line of work has emerged around probing their loopholes, known as adversarial attacks. If these attacks are not identified and mitigated quickly, they can leave applications vulnerable to attackers. This is dangerous in critical business sectors such as healthcare and banking, and more so in the field of medicine, where attacks may drastically affect human lives.

This article focuses on attacks against medical deep learning systems and the adverse impact they may have. We consider a research study conducted by scholars at Harvard Medical School, which explains practical cases of adversarial attacks on deep learning in medical imaging and shows how deleterious such attacks can be to medical diagnosis.

The Emergence Of Adversarial Examples

Deep learning (DL) is a practical application of ML algorithms through neural networks. Of late, deep learning has seen significant adoption in healthcare environments — especially in medical imaging for radiology, where it can deliver quicker and more accurate results than manual diagnosis by physicians. Deep learning is also being used to enhance operating efficiency.

Although the growth has been positive, it has also led researchers to seek out vulnerabilities in deep learning systems called ‘adversarial examples’. This has opened a separate path of research and gained quick attention among ML enthusiasts. First described by Ian Goodfellow and colleagues at Google, adversarial examples leave ML loopholes out in the open, and have inspired researchers to devise ways to trick DL neural networks. For instance, methods such as DeepFool add tiny perturbations that cause image-classification DL systems to misclassify images across large datasets. This could be harrowing and disastrous if applied in medical imaging.

Example of an adversarial attack: a skin cancer image misclassified due to perturbation (Image courtesy: Samuel Finlayson and Isaac Kohane)
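
To make the attack concrete, here is a minimal sketch in PyTorch of the fast gradient sign method (FGSM), the perturbation technique introduced by Goodfellow and colleagues. The model, image and label here are placeholder inputs, and this illustrates the general idea rather than the exact setup used in the Harvard study:

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # Compute the loss gradient with respect to the input pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel in the direction that increases the loss, then
    # clamp back to the valid range so the change stays imperceptible.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

A perturbation this small is typically invisible to the human eye, which is precisely what makes such attacks worrying in diagnostic imaging.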

Roots Of Vulnerability To Adversarial Attacks

In the study, the scholars first discuss the impact of fraud in the ever-growing healthcare sector in the United States. They emphasise that fraud is prevalent due to the size of the ecosystem, and that committing fraud is possible at all organisational levels. Moreover, healthcare data is increasingly being handled electronically, with digital record-keeping on the rise. This affects the economy drastically, since all data related to healthcare is critical: manipulating health-related data, even on a small scale but on a continuous basis, may come at the cost of many lives.

The scholars outline seven factors that significantly expose DL systems’ vulnerabilities. They are listed below, quoted as is from the study:

  1. “Ground truth is often ambiguous”: Physicians and radiologists often disagree over image classification in medical imaging, since the field is relatively new and has spurred controversy even where it has proven beneficial. Attackers take advantage of this, as medical experts are already sceptical of DL imaging.
  2. “Medical imaging is highly standardised”: The stringent standards followed in medical imaging can constrain the variety of DL techniques used, which leaves systems more susceptible to attack.
  3. “Commodity network architectures are used often”: When it comes to computer vision in medical imaging, most researchers follow a similar approach, so the AI models in imaging share the same network architectures. This makes them all the more transparent to attackers.
  4. “Medical data interchange is limited and balkanised”: The mechanisms for sharing medical data are limited, and only a few companies or vendors provide data interchange services between healthcare organisations.
  5. “Hospital infrastructure is very difficult to update”: It is hard for large health organisations to upgrade their medical portfolio, since software systems that serve an entire organisation are complex and difficult to handle when it comes to revisions, updates and new machinery.
  6. “Medicine contains a mix of technical and non-technical workers”: Since the medical sector employs staff from many disciplines, levels of ML knowledge vary drastically. An operator who works a medical imaging device might struggle to spot an adversarial attack, unlike an ML engineer, who would quickly analyse corrupt components in the data; even a crude automated check, like the sketch after this list, could help surface suspicious inputs.
  7. “There are many potential adversaries”: In statistics, uncertainty is a given in any experiment. Similarly, the avenue for new, unknown adversarial attacks remains open: alongside positive developments, the bad side may also show a new face.
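
As a rough illustration of the point in factor 6, here is a minimal sketch (our own illustrative heuristic, not a defence proposed in the study) of a stability check that could flag fragile inputs: if a scan’s predicted class flips under tiny random noise, the input may warrant a second look. The model and image are placeholders.

import torch

def prediction_is_stable(model, image, trials=10, sigma=0.005):
    # Assumed placeholders: `model` maps a batch of images to class logits,
    # `image` is a tensor of pixel values in [0, 1].
    with torch.no_grad():
        base = model(image).argmax(dim=1)
        for _ in range(trials):
            # Add small Gaussian noise and re-check the predicted class.
            noisy = (image + sigma * torch.randn_like(image)).clamp(0, 1)
            if not torch.equal(model(noisy).argmax(dim=1), base):
                return False  # Prediction flipped; treat the input as suspect.
    return True

A check like this is imperfect — carefully crafted perturbations need not be unstable under noise — but it shows the kind of sanity test a non-expert workflow could automate.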

These factors largely determine how an attack creeps into a DL system. Any healthcare organisation following and implementing ML or DL is advised to keep them strongly in mind.

Conclusion

DL applications are always enticing for solving real-world problems, since they achieve operational efficiency as well as help reduce costs. As sectors such as healthcare and banking slowly see a rise in adoption, the feasibility of DL applications — with respect to both data security and data genuineness — should be checked before implementation.


