How Can Artificial Intelligence Address Health Disparities? 

Francesca Spaggiari - 2 November 2021

How can artificial intelligence address health disparities? This is the question that multi-disciplinary teams from institutions representing the Big Ten Academic Alliance tried to answer in the inaugural Big Ten AI Bowl. Hosted by the Institute for Augmented Intelligence in Medicine (IAIM) at Northwestern University, the Big Ten AI Bowl entered the final round last week, and the team from Penn State University, mentored by expert.ai NL enthusiasts, won first place.

Here’s a look at their winning project.

The opportunity for NLU and NLP in simplifying health literacy

Hospital discharge instructions contain important information that patients and their caregivers should be able to easily understand. However, health-related literature is often written at a readability level that is too high for its intended recipients. Information written above a patient's reading level can leave them feeling confused and powerless, and it can lead to worse health outcomes, increased readmissions and increased costs for the healthcare system.

While most of the current Artificial Intelligence solutions related to the discharge process revolve around planning and care coordination, there is an opportunity for health organizations to support patients by improving the readability of discharge instructions and empowering them to take control of their health outcomes as they navigate the health system. This is where NLU/NLP can help.

Improving the readability of discharge instructions

This is the exact problem that the Penn State University team wanted to address. For the past six months, the Penn State team of students from computer science and medicine has been working on "Simplify," a groundbreaking NLP-based healthcare text simplification tool designed to reduce health disparities.

The team proposed a hybrid AI and rules-based system that would suggest simplified language for the content used in discharge instructions, where providers would ultimately approve or reject the suggestions. 

The proposed system is broken down into three major stages, each performing distinct tasks. This modularity allowed the team to develop each stage concurrently while avoiding the potentially costly task of fine-tuning an all-in-one AI solution.

The first stage consists of evaluating the reading level of the words, phrases and text as a whole and flagging difficult or ambiguous words and phrases for potential rewrites. This stage can be implemented using existing reading level metrics like the SMOG or Gunning Fog Index, statistical measures of word rarity in large corpora, and medical dictionaries available through the Unified Medical Language System (UMLS). 
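As a rough illustration of this first stage, a reading-level metric like the Gunning Fog Index can be computed with nothing more than a sentence splitter and a crude syllable counter. The sketch below is an assumption about how such a check could look, not the team's implementation; the sample instruction and the syllable heuristic are both illustrative.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text: str) -> float:
    # Gunning Fog Index: 0.4 * (avg sentence length + % complex words),
    # where "complex" means three or more syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

# A hypothetical discharge instruction: jargon-heavy text scores far
# above the reading level usually recommended for patient materials.
instruction = ("Discontinue anticoagulation medication if hemorrhagic "
               "symptoms develop. Contact your physician immediately.")
print(round(gunning_fog(instruction), 1))
```

A production system would pair such formulas with corpus word-rarity statistics and UMLS dictionary lookups to decide which individual words to flag.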

The second stage consists of suggesting lexical substitutions and clarifications. In this stage, the system will identify appropriate word-level replacements through more common synonyms or generalizations. It will also prompt the provider to clarify ambiguous instructions, e.g., suggesting length-of-time clarification for phrases like "if pain persists." This stage can be implemented as a search algorithm over both a general ontology (e.g., WordNet) and a specialized medical ontology (e.g., SNOMED CT), compared against word-level difficulty metrics.
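A minimal sketch of this substitution stage follows, with a hand-built synonym table standing in for the WordNet/SNOMED CT ontology search. The table entries and function names are illustrative assumptions, not the team's data; the point is the pattern of only proposing candidates that score easier than the original term.

```python
import re

# Illustrative stand-in for a WordNet/SNOMED CT lookup: each flagged
# medical term maps to candidate plain-language synonyms.
SYNONYMS = {
    "hemorrhage": ["bleeding"],
    "hypertension": ["high blood pressure"],
    "analgesic": ["pain reliever"],
    "erythema": ["redness"],
}

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def suggest_substitutions(text: str) -> list:
    """Return (original, simpler) pairs for the provider to approve or reject."""
    suggestions = []
    for word in re.findall(r"[A-Za-z]+", text.lower()):
        for candidate in SYNONYMS.get(word, []):
            # Only suggest a candidate whose every word reads easier
            # than the original term.
            if all(count_syllables(w) < count_syllables(word)
                   for w in candidate.split()):
                suggestions.append((word, candidate))
    return suggestions

print(suggest_substitutions("Watch for erythema or hemorrhage at the site."))
```

Keeping the output as suggestions, rather than automatic rewrites, matches the human-in-the-loop design where providers approve or reject each change.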

The final stage is a phrase- and sentence-level simplification suggestion. This stage takes multiple words (flagged in the previous stage) and automatically suggests simplifications that could include reduced grammatical complexity, splitting off independent clauses as new sentences, and additional lexical substitution. This stage will be implemented as a transformer model trained to perform text simplification on a parallel corpus of sentences from English Wikipedia and Simple English Wikipedia.
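A fine-tuned transformer is beyond the scope of a short example, but one behavior it should learn, splitting off independent clauses as new sentences, can be sketched with a naive rule-based stand-in. Everything below is an illustrative assumption, not the team's model.

```python
import re

def split_independent_clauses(sentence: str) -> list:
    # Naive stand-in for the transformer stage: break a compound
    # sentence at ", and" / ", but" / ", so" into shorter sentences.
    parts = re.split(r",\s+(?:and|but|so)\s+", sentence.rstrip("."))
    return [p.strip().capitalize() + "." for p in parts]

print(split_independent_clauses(
    "Take the medication with food, and call your doctor if pain persists."))
```

A trained model would handle far more constructions than this pattern match, which is why the team proposed learning the transformation from a parallel corpus rather than hand-writing rules.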

Each stage, and the system as a whole, will be evaluated using discharge instructions available in the MIMIC-III dataset, with both qualitative and quantitative evaluation of the suggested simplifications using human judges and existing reading-level formulas.
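The quantitative half of such an evaluation can be sketched by comparing a standard reading-level formula, here the Flesch-Kincaid grade level, on an original/simplified pair. The sample texts and syllable heuristic are hypothetical; the team's actual evaluation texts come from MIMIC-III.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level, one common reading-level formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

original = "Discontinue the analgesic if erythema develops."
simplified = "Stop the pain pills if your skin turns red."
# A successful simplification should lower the grade level.
print(fk_grade(original) > fk_grade(simplified))
```

Automated scores like this complement, rather than replace, human judgments, since a formula cannot tell whether a rewrite preserved the clinical meaning.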

Congratulations to the Penn State team, and many thanks to the Institute for Augmented Intelligence in Medicine for hosting this great event.

Learn more about the competition here.