Using Machine Learning to Improve Neonatal Patient Outcomes at Cohen’s Children Medical Center

A patient in the neonatal intensive care unit cannot verbally communicate with his or her doctors, yet generates approximately one terabyte of medical data per year. Hospitals like Cohen’s Children Medical Center in New York have the opportunity to adopt machine learning initiatives that improve patient outcomes while reducing the cost to serve.

Like neonatal intensive care units (NICUs) across the world, Cohen’s Children Medical Center on Long Island, New York is inundated with data about the critically ill patients under its care. These data come from two main sources: monitoring devices regularly worn by the baby (e.g., a pulse oximeter, which monitors heart rate and blood oxygenation) and a myriad of tests run by physicians (e.g., ultrasounds and echocardiograms). A typical NICU in the United States is estimated to generate approximately one terabyte of data per bed per year.[1] With roughly 80 NICU beds, Cohen’s is therefore generating on the order of 80 TB of data per year. Properly interpreting this mass of data is particularly crucial for infants, who cannot proactively communicate indicators of underlying issues.
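As a back-of-the-envelope sketch of that scale (using only the approximate figures cited above; the per-bed rate and bed count are estimates, not measured values):

```python
# Back-of-the-envelope estimate of annual NICU data volume at Cohen's,
# using the approximate figures cited in the text (not measured values).
TB_PER_BED_PER_YEAR = 1   # ~1 TB of monitoring/imaging data per NICU bed per year [1]
NICU_BEDS = 80            # approximate NICU bed count at Cohen's

annual_tb = TB_PER_BED_PER_YEAR * NICU_BEDS
print(f"Estimated annual NICU data volume: ~{annual_tb} TB")  # ~80 TB
```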

Despite the stakes and the tremendous amount of data, neonatologists struggle with how best to attack some of the most pressing issues facing premature infants. For instance, neonatologists disagree on the optimal rate at which to increase the calories fed to extremely premature infants, even though this choice has a meaningful impact on long-term patient outcomes including cognitive development, growth, and resistance to infection.[2] Highlighting the importance of standardizing the approach to neonatal nutrition, a study in the Journal of Pediatric Gastroenterology, Hepatology and Nutrition underscores, “we need to know the nutrients and their rate of administration for preterm infants that would match preterm neonatal metabolism and growth.”[3] Beyond the obvious risks to patient outcomes, this is a business problem as well. According to the non-profit organization the March of Dimes, the average cost of a NICU patient is $2,500-$3,000 per day.[4] Mistakes in patient care can result in complications, such as infections, that lead to months of additional hospitalization and hundreds of thousands of dollars of avoidable cost.
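To make the financial exposure concrete, here is a rough illustrative calculation using the March of Dimes daily-cost range; the three-month extension is a hypothetical assumption for illustration, not a figure from the sources above.

```python
# Illustrative cost of a complication that extends a NICU stay.
# Daily cost range is the March of Dimes figure cited above [4];
# the 90-day extension is a hypothetical assumption.
DAILY_COST_LOW, DAILY_COST_HIGH = 2_500, 3_000
EXTRA_DAYS = 90  # roughly three additional months of hospitalization

low, high = EXTRA_DAYS * DAILY_COST_LOW, EXTRA_DAYS * DAILY_COST_HIGH
print(f"Avoidable cost of a 90-day extension: ${low:,} to ${high:,}")
# Avoidable cost of a 90-day extension: $225,000 to $270,000
```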

After being born at just 26 weeks gestational age, my daughter spent 85 days in the neonatal intensive care unit at Cohen’s. When I asked pointed questions about the logic behind medical decisions, doctors would often rely on personal experience where statistical modeling could have helped inform the decision. It was clear that there was an opportunity both to (1) synthesize and utilize the stream of data being produced by the infants under their care and (2) gain better access to data being produced by hospitals across the United States. The United States alone has ~20,000 NICU beds, implying annual production of ~20,000 TB of data on the health of premature infants.[5] Unfortunately, even if doctors had access to the data produced by their own and other hospitals, there is a limit to a human being’s ability to critically absorb and analyze new information, and ~20,000 terabytes per year is far beyond the scope of human capability.

In the near term, it appears that investment in improving the use of data will continue to be incremental. Doctors at Cohen’s have emphasized experience-sharing and group huddles to draw on expertise across the staff. In addition, new papers based on manually conducted research are regularly published; when merited, their recommendations can be promptly reflected in updated patient care plans (e.g., the rapid adoption of surfactant to treat bronchopulmonary dysplasia).[6] In the longer term, however, machine learning has the capacity to dramatically alter the way neonatal medicine is practiced. Professor Geraldine Boylan, director of the INFANT Research Centre at University College Cork, is working to adapt machine learning to improve NICU patient outcomes.[7] Boylan, a neurophysiologist, wants to use AI to “objectively, consistently” look for patterns in patient data.[8] For instance, she has helped develop a “smart system” called NEUROPROBE, which uses historical data to better understand the link between electrical brain activity and blood pressure. The hope is that the program will identify infants who require treatment for brain injury faster and more accurately than current methods. Over time, systems like NEUROPROBE could be used across the spectrum of neonatal care decisions, in conjunction with an attending physician, to proactively identify patients who need treatment earlier, avoiding unnecessary patient suffering and undue financial burden on families. In the next 10 years, NICUs like Cohen’s should seek to partner with research organizations such as INFANT to develop treatment plans based on statistical models rather than on less reliable human experience.
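To give a concrete sense of what a decision-support model of this kind might look like, below is a minimal, purely illustrative sketch: a logistic regression trained on synthetic vital-sign features to flag patients for earlier clinical review. It is not NEUROPROBE or any real clinical model; the features, relationships, and data are invented for illustration.

```python
# Minimal illustration of a decision-support classifier on synthetic NICU monitoring features.
# NOT NEUROPROBE or a clinical model; all features, relationships, and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic features: mean arterial blood pressure (mmHg) and a summary EEG activity score.
mean_bp = rng.normal(40, 8, n)
eeg_score = rng.normal(0.0, 1.0, n)

# Synthetic label: "needs earlier review", more likely when BP is low and EEG activity is suppressed.
risk = 1 / (1 + np.exp(-(-0.15 * (mean_bp - 40) - 1.2 * eeg_score)))
needs_review = rng.random(n) < risk

X = np.column_stack([mean_bp, eeg_score])
X_train, X_test, y_train, y_test = train_test_split(X, needs_review, random_state=0)

# Train a simple classifier and check how well it recovers the synthetic risk pattern.
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy on synthetic data: {model.score(X_test, y_test):.2f}")
```

Even in toy form, the workflow mirrors what Boylan describes: historical data in, a calibrated flag out, with the attending physician retaining final judgment.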

As in any medical context, the implementation of machine learning raises ethical concerns. Doctors will need to grapple with whether a counter-intuitive result represents a response like Watson’s “Toronto????” guess in Final Jeopardy, an answer obviously wrong on its face, or presents an opportunity for groundbreaking new advances in the field. Is it ethical to test the plan on a person in order to find out? (Word count: 789)

Footnotes:

[1] Khazaei, Hamzeh, “Health Informatics for Neonatal Intensive Care Units: An Analytical Modeling Perspective,” in IEEE Journal of Translational Engineering in Health and Medicine, October 2015: https://www.researchgate.net/publication/282427778_Health_Informatics_for_Neonatal_Intensive_Care_Units_An_Analytical_Modeling_Perspective

[2] UCSF Children’s Hospital at UCSF Medical Center: Intensive Care Nursery Staff Manual. 2004. https://www.ucsfbenioffchildrens.org/pdf/manuals/15_FeedingPretermInfants.pdf

[3] Hay, William, “Nutritional Support Strategies for the Preterm Infant in the Neonatal Intensive Care Unit,” in Journal of Pediatric Gastroenterology, Hepatology and Nutrition, October 2018: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6182475/

[4] “AI Shows Success in Reducing Premature Births,” in Modern Healthcare, November 6, 2018: https://www.modernhealthcare.com/article/20181106/NEWS/181109949

[5] “Critical Care Statistics,” Society of Critical Care Medicine, 2016: http://www.sccm.org/Communications/Critical-Care-Statistics

[6] Halliday, Henry, “History of Surfactant Since 1980,” in Karger Journal, May 2005: https://www.karger.com/Article/Pdf/84879

[7] Godsil, Jillian, “AI is Needed for Medical Health,” in Irish Technology News, November 9, 2018: https://irishtechnews.ie/a-i-is-needed-for-medical-health/

[8] INFANT Research Centre website: http://www.infantcentre.ie/our-research/newborn-health (organization website)


Student comments on Using Machine Learning to Improve Neonatal Patient Outcomes at Cohen’s Children Medical Center

  1. There is indeed great potential to utilise the data produced in infants’ intensive care units to help doctors and scientists in the medical field make better decisions. Unlike Watson, the machines don’t need to make decisions but rather synthesise data into evidence-backed recommendations that a human doctor must review with a critical eye before deciding accordingly.

    I am glad that your daughter left the hospital after 85 days and I hope she is healthy now.

  2. Great submission. I agree with the premise of your article — we have an unprecedented ability to collect individual patient-level data in the healthcare space. However, our ability to make use of this data is sadly lagging, and computer learning/analytics might be able to fill the gap. While the machines may not have to make actual decisions (as mentioned by Energy), I still think your Watson comparison is valid. Unlike most other fields, mistakes in healthcare often have catastrophic consequences. Even machines’ synthesis of this data must be thoroughly scrutinized and verified for accuracy. Moreover, recommendations made by machines will still rely on human input (programming, etc). The way forward should be a slow, methodical one.

  3. Interesting article. A common problem in healthcare is having too much data and needing time to process the relevant information without leaving anything out. A difficulty in implementing AI/machine learning in healthcare is that everything still has to be verified by a provider, which ends up taking significant amounts of time (as with Watson at MSK). Both Watson and this machine learning initiative synthesize data rather than make decisions, but physicians still review all relevant data. However, it will take time and development to ‘teach’ machine learning systems in healthcare.

  4. Is it ethical to test the plan on a person in order to find out? As in many industries, replacing human decision-making with machine learning poses serious risks. A potential path forward is to design systems and tools that are focused on assisting, rather than replacing, the decisions of experienced medical practitioners. To start, perhaps a hospital could deploy a system alongside doctors and allow it to gather data and learn without revealing recommendations to doctors. Once a more robust technology has been developed, its recommendations could be leveraged alongside other data sources as inputs to help inform or speed up decision-making.

  5. As mentioned above, coupling machine learning with the doctor’s discretion is a great opportunity to accelerate progress in this arena. Much of the progress in neonatal care has been made the hard way through trial and error, so this provides a great way to both pull insights from past experiences and accelerate future learnings. Thanks for highlighting this potential, KMA!

  6. Great article! This definitely resonated with me, as I remember countless times feeling frustrated that my neonatal and pediatric patients were unable to tell me what they were feeling or experiencing, leaving me to rely solely on whatever objective measures we had on hand from the physical exam and the machines/labs/scans. There is so much untapped potential in the data that these hospitals generate (e.g. temporal associations between signs/symptoms and clinical outcomes that could serve as warning signs). I do worry about our reliance on papers and clinical trials to be the only driving force of change in the field, as it typically takes 5-10 years for these recommendations to become standard of care. Finally, I think you nailed it when you brought up the gravity of the “Toronto” mistake in healthcare, clearly much graver than an incorrect Jeopardy answer.

  7. Great read, thank you!

    I am very excited about the opportunities for ML in healthcare, and this is indeed one of the burning areas. I see that your proposition would work like this: based on past data and patient outcomes, AI will suggest a set of actions. Building this is of course not trivial; however, I see some barriers beyond that. While building the AI, you would need a tremendous amount of physician input. Ideally, those physicians also need to understand machine learning concepts. Getting these people on board for a long time might be very costly, and they also might not want it since it shrinks their industry.
