
Ethical Challenges in AI and Other Applied Technologies

Image of Dr. Emily Webber

On September 1, 2021, Dr. Emily Webber, MD, FAAP, FAMIA, presented “Ethical Challenges in AI and Other Applied Technologies” for the Fairbanks Lecture Series in Clinical Ethics.

Dr. Webber is the Chief Medical Information Officer for IU Health and Riley Children’s Health, an affiliate scientist at the Regenstrief Institute, and chairperson of the American Academy of Pediatrics Council on Clinical Information Technology. She is also a practicing pediatrician and is board certified in pediatrics, pediatric hospital medicine, and clinical informatics. Her work is currently focused on optimization of health IT, applications to improve quality of care and patient safety, and innovation.

In “Ethical Challenges in AI and Other Applied Technologies,” Dr. Webber emphasized the importance of asking good questions in order to recognize the potential benefits and pitfalls of a given AI program for use in healthcare. She identified the most important questions to ask as:

  1. What is the range of intelligence of the AI?
  2. Does it detect patterns and observe outcomes?
  3. Did a human write all the logic rules?
  4. Is the machine/software algorithm adjusting on its own or adjusting based on a rule its programmers provided?
  5. Do the features of the learning model mean something to a human observer? 

She also warned of risks that come along with AI and must be considered. The first risk she identified was bias, noting that bias can enter at multiple points, such as having too few data points in a set, misapplication of the resulting output, and inadequate measures of accuracy. In order for medical AI to properly serve the community, the data set must be large enough to adequately cover all persons it may be used on.
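To make that last point concrete, here is a minimal, hypothetical sketch in Python (not drawn from Dr. Webber's talk; the labels, groups, and numbers are invented for illustration) showing how an aggregate accuracy figure can hide poor performance on an under-represented subgroup.

```python
# Hypothetical illustration: aggregate accuracy can mask failure on a
# small subgroup. All data below are made up for demonstration purposes.

def accuracy(pairs):
    """Fraction of (prediction, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

# (predicted_label, true_label, subgroup) for a fictional screening model
results = [
    ("flag", "flag", "group_A"), ("clear", "clear", "group_A"),
    ("clear", "clear", "group_A"), ("flag", "flag", "group_A"),
    ("clear", "clear", "group_A"), ("flag", "flag", "group_A"),
    ("clear", "flag", "group_B"), ("clear", "flag", "group_B"),  # both missed
]

overall = accuracy([(p, a) for p, a, _ in results])

by_group = {}
for p, a, g in results:
    by_group.setdefault(g, []).append((p, a))

print(f"overall accuracy: {overall:.0%}")  # 75% looks acceptable in aggregate
for g, pairs in by_group.items():
    # group_B, with only two data points, is missed entirely
    print(f"{g}: {accuracy(pairs):.0%} (n={len(pairs)})")
```

The aggregate number (75%) looks reasonable, while the under-represented group is misclassified every time, which is the kind of gap that subgroup-level evaluation and larger, more representative data sets are meant to catch.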

Another risk Dr. Webber identified is the pressure to adopt and scale quickly; in response, we must ask whether a given technology is likely to be used and whether it will make it into the mainstream healthcare system, as many technologies never get that far. She also raised the risk of public perception and how we must weigh convenience, privacy, choice, and individuality when evaluating AI. There is a built-in reluctance toward AI in healthcare, and the public may not react well to having AI used on them.

Dr. Webber also examined case studies in which these questions can be applied and described what raises red flags for her in AI technology. Despite these concerns, she did not argue that we should never try AI; rather, we should keep a healthy skepticism and continue to question new technologies. She advocated for a balance between innovation and the discipline to regulate and ask questions, analyze the data used, and assess the skills needed, including comparing human performance to AI and determining the clinical consequences of AI.

Find Dr. Webber's talk online here.

The views expressed in this content represent the perspective and opinions of the author and may or may not represent the position of Indiana University School of Medicine.
Author

Emily Varanka

Emily is currently pursuing a Master's in Philosophy with a concentration in bioethics at IUPUI. She is also the graduate assistant in the IU Center for Bioethics.