Why artificial intelligence in health care is harder than you would think

While the health care industry is likely to reap the benefits of artificial intelligence, several glaring issues need to be fixed, or at least mitigated, first


AI is the next industrial revolution. Although the technology is still in its nascency, we are already seeing bold strides in areas as varied as extracting metadata from videos, voice recognition, natural language processing, and handwriting recognition. The impact of AI can be felt across industries, from the military to media and entertainment. Arguably, though, the industry most likely to see growth is health care. According to a recent report from PwC, health care is one of the sectors most likely to reap the benefits of AI, and many companies are already investing in AI services for health care; IBM Watson Health is a prominent example.

Several applications have already been deployed in the health care industry, such as chatbots for triage (Babylon Health), personalized care in the treatment of spinal problems (Medicrea), and identification of elderly patients at risk of falling (El Camino Hospital and Intermountain Healthcare). There have also been notable failures, such as a $62 million project carried out by IBM for MD Anderson in Houston that produced no tangible benefits and fueled skepticism about AI in medicine.

Many areas of health care can benefit from AI. While the following list is not exhaustive, some examples include:

  • Diagnostics and pathology: Deep learning networks can classify radiology studies by identifying and locating illnesses or pathologies in images (a minimal sketch of this approach appears after this list). Expert systems have also been used in medicine for many years: MYCIN, one of the earliest, was developed to identify the bacteria causing severe infections, and it led the way toward more advanced expert systems that showed moderate success. Computer-aided diagnosis (CADx) and computer-aided detection (CADe) have been used in both radiology and pathology, not to replace doctors but to narrow down the possibilities.
  • Drug discovery: Many of the largest pharmaceutical companies, such as Pfizer and Merck, have implemented or are implementing AI solutions. The potential rewards are enormous because AI can reduce the cost of development, approval, and marketing, which currently can run well over $2 billion and take many years before a drug reaches the market. Furthermore, less than 10 percent of drugs that reach Phase III clinical trials get past the FDA. Fortunately, the pharma industry has large amounts of data that can be used to build machine learning solutions, which makes implementing AI easier. Applications include compound screening, target validation (determining whether the drug actually acts on the disease), customization (tailoring drugs to specific patients), repurposing (using existing drugs for other diseases), and literature searches. Much of the work in this space is based on machine learning models such as generative adversarial networks (GANs) and reinforcement learning.
  • Insurance: In insurance, AI is being used in many contexts, from extracting text from scanned documents to detecting fraud, identifying risk, and supporting preventive medicine. The insurance industry benefits, first, from vast troves of data and, second, from the extensive mathematical analysis already performed on that data.
  • Robotics: One of the most interesting applications of AI is robotics. Robots assist in surgery, draw blood, help with assisted living, and even power exoskeletons for people who have lost the use of their limbs.
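To make the diagnostics example concrete, here is a minimal sketch of how a deep learning classifier for radiology images might be set up with PyTorch. It is not a production system: the dataset path, folder layout, and the normal/pneumonia labels are illustrative assumptions, and a real study would need far more data, careful validation, and regulatory review.

```python
# A minimal sketch (not a production system): fine-tuning a pretrained
# CNN to flag a single pathology in chest X-rays. The dataset path and
# class labels below are illustrative assumptions, not a real archive.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: xray_data/train/{normal,pneumonia}/*.png
train_set = datasets.ImageFolder("xray_data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the final layer so the network predicts our two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a real study trains far longer, with a validation set
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```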

While we are seeing substantial investment in AI from the health care industry, several glaring issues need to be fixed, or at least mitigated.

The first is data. Training data is essential for machine learning; in fact, the volume of data needed can exceed what many organizations can generate. There are several reasons for this. Curating data for training is often manual and time-intensive. Organizational barriers are imposed by regulation and security, or arise simply because organizations that want to share data store, access, and manage it in different, often incompatible ways. Privacy concerns prevent organizations from divulging sensitive data or storing it on cloud platforms. Incorrect assumptions about the underlying data can prevent a stable ground truth from being established. Finally, the data is often not statistically sound, which reduces its value for training.

The second issue is lack of transparency, or the black box effect. There are currently only limited strategies for uncovering why an AI made a particular decision. Even determining which variables matter most to the model is not always easy. One known method is to use dimension reduction techniques such as principal component analysis (PCA), which compresses a large set of variables into a smaller set of derived components without sacrificing much predictive value (a short sketch appears below). Note that PCA does not pick out individual variables; each component is a weighted combination of the originals, which itself complicates interpretation. In general, however, there is no reliable way to answer the question of why an AI made a particular decision. This poses challenges in fields such as medicine and aviation, where the room for error is minimal and there must be high confidence in the decisions or predictions the AI makes. One way to mitigate this is to keep a human in the loop who understands the limitations of the model and ensures consistency with decisions made by people.
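As an illustration of the dimension reduction idea, here is a minimal sketch using scikit-learn's PCA. The synthetic matrix (a few hidden factors plus noise) is an assumption standing in for real clinical variables such as lab values, vitals, and demographics.

```python
# A minimal sketch of dimension reduction with PCA (scikit-learn).
# The synthetic matrix stands in for real clinical variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 500 patients, 30 raw variables driven by 5 hidden factors plus noise,
# so the data has genuine low-dimensional structure for PCA to find.
latent = rng.normal(size=(500, 5))
mixing = rng.normal(size=(5, 30))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 30))

# PCA is scale-sensitive, so standardize each variable first.
X_std = StandardScaler().fit_transform(X)

# Keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_std)

print(f"{X.shape[1]} variables reduced to {X_reduced.shape[1]} components")
print("variance explained per component:", pca.explained_variance_ratio_)
```

The trade-off is visible in the output: the model gets a compact input, but each retained component is a weighted mix of the original variables (see pca.components_), so the "why" behind any single prediction becomes even harder to narrate to a clinician.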

The third is governance. The health care industry, in particular, is highly process-driven, for good reasons. Inserting an AI into a process can challenge an organization's decision-making mechanisms, and organizations need to adapt. When should a decision by an AI be trusted? Is the AI a final decision maker, or an adviser that recommends decisions? How do you match outcomes to the decisions the AI made, and what is the feedback process for correcting the AI when errors are discovered? Is the staff equipped to use AI, and do they possess adequate skills, such as understanding the difference between correlation and causation, or enough domain knowledge to reject decisions or predictions that don't make sense? For example, because children drink far less caffeine than adults, and children are also shorter, age induces a statistical relationship between height and caffeine consumption; not understanding that age is the common cause might lead one to predict that tall people drink more caffeine than short people (the sketch below makes this concrete).
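The caffeine/height trap is easy to reproduce with a toy simulation (all numbers here are hypothetical): age drives both height and caffeine intake, so the two correlate strongly across the whole population, yet the relationship vanishes once you condition on age.

```python
# A toy illustration (hypothetical numbers) of the caffeine/height trap:
# age drives both height and caffeine intake, so the two correlate even
# though neither causes the other.
import numpy as np

rng = np.random.default_rng(42)
age = rng.uniform(5, 60, size=2000)                     # years

# Height rises with age until adulthood, then plateaus (rough toy model).
height = 100 + 4.0 * np.minimum(age, 18) + rng.normal(0, 6, size=age.shape)

# Children drink little caffeine; adults drink much more (mg/day).
caffeine = np.where(age < 18, 20, 180) + rng.normal(0, 30, size=age.shape)

print("corr(height, caffeine), everyone:",
      round(np.corrcoef(height, caffeine)[0, 1], 2))

# Condition on the confounder: among adults only, the link disappears.
adults = age >= 18
print("corr(height, caffeine), adults only:",
      round(np.corrcoef(height[adults], caffeine[adults])[0, 1], 2))
```

Run as-is, the first correlation comes out strongly positive while the adults-only correlation is near zero, which is exactly the sanity check a domain-aware reviewer would apply before trusting a "tall people drink more caffeine" prediction.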

Finally, there are regulatory and ethical issues around AI. The most obvious concern is privacy. Both HIPAA (the Health Insurance Portability and Accountability Act) in the US and the GDPR (General Data Protection Regulation) in Europe place constraints on the patient data that can be made public (even in aggregate), which can be an impediment to constructing training sets. When and how an AI may be used in the industry is also a concern. For example, should an AI deployed in a hospital to diagnose diseases have to go through FDA approval? As artificial intelligences become more sophisticated, ethical issues also start to emerge. Just because an inference can be drawn by an AI doesn't mean it should be. If an AI is used to deny some people health insurance, the reasons should be clearly understood so that implicit and harmful biases can be ruled out. And if an AI can determine that someone has a certain disorder or disease by correlating public pieces of information gleaned from social networks or other public sources, an argument can be made that this violates privacy rights, or is at the very least ethically questionable.

AI in health care is set to explode. Many insurance companies, health care providers, and equipment manufacturers are starting to use, or are at least considering, AI in their processes. The benefits are clear: lower cost of medicine, potentially better diagnoses, better process control, and more advanced, lower-cost drugs.

However, unlike many other industries, health care faces a slew of issues that fundamentally affect the rate at which, and in some areas the willingness with which, AI is accepted. Greater transparency is needed into when and how an AI is used; health care workers need to be equipped with the knowledge and understanding to use AI; and a regulatory framework will need to be created to protect both health care workers and the consumers of health care services and products. Only by creating a controlled environment for the use of AI can this powerful technology benefit as many people as possible while minimizing its dangers.
