Artificial intelligence has brought a paradigm shift to healthcare, powered by the increasing availability of data and progress in analytics techniques. AI is also democratising healthcare by providing access to affordable solutions, especially in rural areas and low-income communities, where doctors increasingly rely on remote diagnostic applications to deliver accurate diagnoses and treatments.
The increasing use of AI has led to a spurt of healthtech startups that apply machine learning algorithms to digital and biometric data. These startups are leveraging growing computing power and advances in the biological and medical sciences, including genomic sequencing, to bring innovative solutions to market. Today, sensitive health data, including biometrics, is used to train machine learning algorithms, enabling new drug discovery, more accurate diagnostics and personalised treatments.
Powerful AI techniques are now being deployed to assist in clinical decision-making; for example, IBM Watson is used at Manipal Hospitals for cancer care.
According to industry estimates, half of the more than 3.4 billion smartphone and tablet users will have downloaded mobile health apps. But are these healthcare breakthroughs coming at a price? They bring a string of policy and ethical dilemmas, along with risks such as data breaches, that policymakers have to confront before supporting the wider use of AI in healthcare.
In this article, we will discuss the ethical and legal risks associated with the use of AI in healthcare.
- Data is susceptible to hacking and privacy breaches; the 2017 WannaCry cyber attack on the NHS in the UK is a case in point. Experts also believe that biometric data collected from wearables can be hacked and used for advertising purposes, or to create fake news for political or social campaigns. Even though data protection guidelines have come into force and there are calls for anonymisation of data, we still have a long way to go to ensure data security. Blockchain technologies offer a more robust way to protect data from tampering (a minimal sketch of the underlying idea appears after this list).
- There is also a need to develop standards for data collection in healthcare. A research report emphasised that the testing of medical AI technologies has to be led by physicians, academicians, industry experts and other stakeholders. Open-source development can also help address the key issues and facilitate the growth of medical AI. A lack of standards will give rise to legal issues, especially when it comes to patient outcomes.
- Policymakers have also raised other ethical issues: the difficulty of validating the outputs of AI systems, the persistent risk of inherent bias in the data used to train them, and the question of where responsibility lies when AI is used to support decision-making. One practical check on bias is to measure model performance separately for each patient subgroup, as shown in the second sketch after this list.
- Recently, Stanford University researchers pushed for updated ethical guidelines for the use of AI in healthcare and called for educating physicians in how AI systems are built. The journal quoted author Danton S Char on the need to address, as soon as possible, the challenges related to bias and the questions about the relationship between patients and machine-learning systems. There is a growing body of research on how AI will redefine the patient-doctor relationship (for instance, with assistive robots used in elder care) and the sociological and legal concerns around it.
- There is an urgent need for standardisation so that machine learning systems are built to reflect ethical standards. Key steps towards implementing these standards include policy enactment and task-force work.
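To illustrate the tamper-protection point above, here is a minimal sketch in Python of the hash-chaining idea that underpins blockchain-style audit logs. It is not a full blockchain implementation, and the record fields and values are hypothetical; the point is that altering any stored record breaks every subsequent hash, making tampering detectable.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous hash, so changing
    any earlier record invalidates every hash that follows it."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records):
    """Return a list of (record, hash) pairs forming a hash chain."""
    chain, prev = [], "0" * 64  # genesis hash
    for rec in records:
        h = record_hash(rec, prev)
        chain.append((rec, h))
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; a tampered record breaks the chain."""
    prev = "0" * 64
    for rec, h in chain:
        if record_hash(rec, prev) != h:
            return False
        prev = h
    return True

# Hypothetical wearable readings for an anonymised patient.
readings = [{"patient": "anon-17", "hr": 72}, {"patient": "anon-17", "hr": 75}]
chain = build_chain(readings)
assert verify_chain(chain)
chain[0][0]["hr"] = 40          # tamper with the first stored reading
assert not verify_chain(chain)  # verification now fails
```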
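And to illustrate the bias-validation point, below is a minimal sketch of a per-subgroup performance check. The labels, predictions and demographic attribute are invented for illustration; in practice one would compute such metrics on a real validation set, and a large gap between subgroups is a signal that the training data or model deserves an audit.

```python
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy per demographic subgroup; large gaps flag potential bias."""
    hits, counts = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        counts[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / counts[g] for g in counts}

# Hypothetical validation set: true labels, model predictions and a
# demographic attribute for each patient.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(subgroup_accuracy(y_true, y_pred, groups))
# {'A': 0.75, 'B': 0.5} -> a gap worth auditing
```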
Outlook
Although the increasing use of AI technologies has attracted substantial attention, experts note that real-life implementations face several roadblocks. There are regulatory concerns surrounding AI; for example, there are currently no standards to assess the safety and efficacy of AI systems. The US FDA has, however, attempted to provide guidance. The first guidance document classifies certain AI systems as 'general wellness products', to be regulated as devices intended for low-risk use. The second emphasises the use of real-world evidence to assess the performance of AI systems.
Lastly, it calls for the establishment of rules for adaptive design in clinical trials, which would be widely used in assessing the operating characteristics of AI systems.