Jun 19 2017

It’s Benign!

by Subha Madhavan at 2:09 pm

My surgeon, Dr. Shawna Willey, walked into the exam room where I waited nervously. I saw her thumbs-up before I saw her beaming face. I could breathe again!

My friends and I, who recently turned 40 and started our baseline mammograms, can’t help but wonder about the lack of consensus on optimal cancer screening strategies, target populations, and their benefits and harms. My colleague Dr. Jeanne Mandelblatt and her team have studied breast screening strategies for decades and have shown that biennial screening from ages 50-74 achieves a median 25.8% reduction in breast cancer mortality, whereas annual screening from ages 40-74 reduces mortality by an additional 12% but introduces very high false-positive rates. Many women and their families are subject to extreme anxiety due to the sheer number of repeat mammograms, false positives, benign biopsies and, in 7% of cases, overdiagnosis.

My experience with breast cancer screening, and that of my friends, raises many questions. How can we better predict the target risk population who must undergo screening early and often? Would this decision-making process consider risk factors, lifestyle, and patient preferences? How often are patients with a diagnosis of a benign breast condition on a stereotactic core needle biopsy upgraded to a non-benign diagnosis on an excisional biopsy, which requires full sedation and surgery? What was the care journey like for other patients like me: Asian, female, healthy, no family history? How many in the US and globally have access to the excellent care and follow-up that I was privileged to receive from Dr. Willey and her expert team?

Touted as the fourth industrial revolution, Artificial Intelligence is poised to empower clinicians, patients and researchers in answering these questions. What is AI? The term was coined by Dartmouth professor Dr. John McCarthy in 1956 and defined as “the science and engineering of making intelligent machines, especially intelligent computer programs.” Applications of AI in medicine have been limited by the complexity of highly cognitive processes such as making a medical diagnosis or selecting a treatment, which require integrating thousands of datasets with millions of variables and multiple interactions among those variables. It takes years to collect, organize and publish practice-changing results such as Jeanne’s screening study. What if we could use the data we routinely collect during the care process and effectively apply AI to assist clinicians in real time to make informed treatment decisions?

Interested in learning more about AI in Biomedicine? Want to engage with expert scientists and product developers in AI? Register for Georgetown’s Big Data in Biomedicine symposium on October 27th!

Companies like Google and Amazon are betting big on this. Jeff Bezos wrote, “…it is hard to overstate how big of an impact AI will have on society over the next 20 years.” Google’s Sundar Pichai, when asked recently about the next big thing at Google, responded, “I can’t quite tell exactly, but advances in AI and machine learning, we are making a big bet on that and this will bring a difference in many many fields.”

We cannot have a conversation about AI in medicine without discussing IBM Watson, the supercomputer that sifted through 20 million cancer research papers and conducted a differential diagnosis on a difficult-to-treat leukemia patient in 10 minutes by combining genomic data with the power of cognitive computing. One concern that informaticians, including my informatics mentor Dr. Bill Hersh, have raised is that the publicity around Watson has come mostly from news articles and press releases, primarily from researchers at IBM; they call for a more scientific analysis of its abilities in clinical decision making, not n-of-one case reports. Systems like Watson will benefit from systematic expert knowledge input to guide the cognitive computing processes in navigating complex medical pathways.

While it is still early, AI is already starting to make important contributions to medicine, says Dr. Regina Barzilay, an AI professor at MIT and a recent breast cancer survivor. She and her team are asking all the right questions of data: “Can we apply the sophisticated algorithms we use to predict customers’ shoe-buying habits to adjust treatments for cancer patients?” “Can computers detect signs of breast cancer, or even pre-malignancy, earlier than humans are currently capable of?” And the holy grail: “Can we use the huge quantities of data from smart toothbrushes, wearables, genomic sequencing, and medical records to get to the first and right treatment?”

What next?

In the last decade, big data in biomedicine has focused on collecting information (e.g., through mobile and other IoT devices) and organizing it (e.g., in cloud computing), but all signs point in one direction for the next decade: real-world applications of AI. We will witness the development of expert systems, question-answering systems and deep learning methods that begin to address complex real-world problems in medicine. These will augment, not replace, human expertise. Winners will find ways to rapidly and accurately integrate human input with computational output. Usability of these tools by end users, and human factors, will be key.
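To make that “human plus machine” idea concrete, here is a minimal, purely illustrative sketch in Python (all names, fields, and values are hypothetical and not tied to any specific product): a model’s suggestion is surfaced with its confidence and supporting evidence, and the clinician’s confirmation or override is recorded alongside it so both remain auditable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Suggestion:
    """A model's proposed action plus the evidence behind it (illustrative only)."""
    treatment: str
    confidence: float                               # 0.0-1.0, as reported by the model
    supporting_evidence: list = field(default_factory=list)

def review(suggestion: Suggestion, clinician_decision: str, clinician_id: str) -> dict:
    """Pair the computational output with the human decision so both are auditable."""
    return {
        "model_treatment": suggestion.treatment,
        "model_confidence": suggestion.confidence,
        "clinician_decision": clinician_decision,
        "agreed": clinician_decision == suggestion.treatment,
        "clinician_id": clinician_id,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical use: the model suggests, the clinician decides, and both are logged.
s = Suggestion("biennial screening", 0.82, ["guideline citation", "similar prior cases"])
record = review(s, clinician_decision="annual screening", clinician_id="clinician_001")
print(record["agreed"])  # False: the human override wins, and the disagreement is captured
```

The point of the sketch is the structure, not the model: the computational output never acts on its own, and every disagreement becomes data for improving the system.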

While I am a true tech-automation enthusiast at heart and in practice, I will never forget Dr. Willey’s kind and soft words as she clearly explained my pathology report. She also carefully noted in my medical record the rare hypersensitivity to the pre-op antiseptic agent chlorhexidine that I developed after anesthetic induction.

One more data point!

Let’s continue the conversation.


Oct 24 2013

Keynote Talks at ICBI symposium: Stephen Friend and Eric Hoffman

by at 4:40 pm

Big Data in Precision Medicine was the focus of the 2nd Annual Biomedical Informatics Symposium at Georgetown, which drew nearly 250 people to hear about topics ranging from direct-to-consumer (DTC) testing to mining data from Twitter.

The morning plenary on Genomics and Translational Medicine was kicked off by Stephen Friend, MD, PhD, President, Co-founder, and Director of Sage Bionetworks, who discussed the “discontinuity between the state of our institutions and the state of our technology.” This disconnect stems from the way results are presented in the literature, compared with one another in different scenarios, and sometimes interpreted into the clinic. “We are going to get different answers at the DNA, RNA, and functional levels,” said Friend, and different groups working on the same data can get different answers because science is “context dependent”: dependent on the samples, technologies, and statistical parameters. Our minds are wired for a “2D narrative,” but the fact is we are all just “alchemists.”

Friend is a champion of open data sharing and of turning the current system on its head. We need “millions of eyes looking at biomedical data…not just one group, it’s immoral to do so,” Friend said. We need to get rid of the paradigm, “I can’t tell you because I haven’t published yet.” He noted that GitHub has over 4 million people sharing code with version tracking, and that hiring managers for software engineering jobs are in fact more likely to look at a potential candidate’s work on GitHub than to consider the credentials on a CV.

Sage created Synapse, a collaborative and open platform for data sharing, which he hopes could become the GitHub for biomedical scientists. He would like to see large communities of scientists worldwide working together on a particular problem and sharing data in real time. As an example of this sort of effort, check out Sage’s crowdsourced Rheumatoid Arthritis Responder Challenge for genetic prediction of clinical utility. His excitement for this future model of large-scale collaboration was palpable in his closing remarks: a prediction of a future Nobel Prize for “theoretical medicine.”
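For readers curious about what working with Synapse looks like in practice, here is a minimal sketch using the open-source Synapse Python client (synapseclient); the entity ID is a placeholder, and you would need a Synapse account (and any required data-use approvals) before running it.

```python
import synapseclient

# Connect to Synapse; credentials are read from a local config file or prompt,
# so no secrets need to appear in the script itself.
syn = synapseclient.Synapse()
syn.login()

# Fetch a shared dataset by its Synapse ID (placeholder ID shown here).
# The file is cached locally, and Synapse tracks its version and provenance.
entity = syn.get("syn00000000")
print(entity.name, entity.path)
```

The appeal Friend described is less the client itself than the workflow it enables: versioned, citable datasets that many groups can analyze and annotate in parallel.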

The afternoon plenary on Big Data in Biomedicine was led by a keynote talk from Eric Hoffman, PhD, Director of the Research Center for Genetic Medicine at Children’s National Medical Center, who discussed “data integration in systems biology,” a topic very close to ICBI’s heart. He presented a new tool, miRNAVis, to integrate and visualize microRNA and mRNA expression data, an approach he referred to as “vertical” data integration, that is, the integration of heterogeneous data types. The tool will soon be released for public use.
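As a rough illustration of what “vertical” integration means here (this is not miRNAVis itself; the file names, column layout, and miRNA-to-target map below are all hypothetical), one might align miRNA and mRNA expression profiles measured on the same samples and check each miRNA against its predicted target gene, where repression would show up as a negative correlation.

```python
import pandas as pd

# Hypothetical inputs: rows are samples, columns are features, values are expression levels.
mirna = pd.read_csv("mirna_expression.csv", index_col="sample_id")  # columns like "miR-21", "miR-155"
mrna = pd.read_csv("mrna_expression.csv", index_col="sample_id")    # columns like "PTEN", "SOCS1"

# Hypothetical miRNA -> predicted target gene map (in practice, from a target-prediction database).
targets = {"miR-21": "PTEN", "miR-155": "SOCS1"}

# "Vertical" integration: align the two heterogeneous data types on shared samples,
# then correlate each miRNA with the mRNA level of its predicted target.
shared = mirna.index.intersection(mrna.index)
for mir, gene in targets.items():
    if mir in mirna.columns and gene in mrna.columns:
        r = mirna.loc[shared, mir].corr(mrna.loc[shared, gene], method="spearman")
        print(f"{mir} vs {gene}: Spearman r = {r:.2f}")  # repression would appear as r < 0
```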

Hoffman is considered one of the world’s top experts in muscular dystrophy research, having cloned the dystrophin gene in Louis Kunkel’s lab in 1987. He has made an enormous contribution to research in this field, along with dedicating countless hours to volunteering with children affected by this horrible disease. He discussed a very exciting project in his lab on a promising new drug, VBP15, which has anti-inflammatory properties, strongly inhibits NF-κB, and promotes repair of skeletal muscle. Most importantly, VBP15 does not have the side effects of glucocorticoids, which are currently the standard treatment for Duchenne muscular dystrophy. Hoffman said this new drug may also prove effective against other chronic inflammatory diseases. Let’s hope it makes it into clinical trials very soon!

More information about the keynotes and other talks can be found on ICBI’s Twitter feed and under the hashtag #GUinformatics, which provided snapshots of the day.
