Artificial intelligence has become a buzzword in many fields. For manufacturing, it may mean a loss of jobs, but for radiologists, implementing artificial intelligence may result in a more efficient use of time, and the ability to focus on more difficult cases.
Day to day, radiologists are inundated with images and data. For example, a radiologist will typically view 4,000 images in a CT scan of a patient with multiple traumas. As a result of the immense amount of data they must process, radiology is approaching a productivity crisis.
While improved imaging technology can provide better outcomes for patients, how can technology improve the radiologist’s capabilities and job functioning?
Artificial intelligence and deep learning may offer a solution.
Mass General has partnered with AI computing company NVIDIA to process their database of approximately 10 billion medical images through deep learning algorithms. With NVIDIA’s server, which was designed for AI applications, the deep learning algorithms written by NVIDIA engineers and Mass General data scientists will aim to improve detection, diagnosis, treatment and management of disease.
Mark Michalski, Executive Director of the Center for Clinical Data Science at Mass General, which will continue to develop this technology, says that “the idea of neural networks [utilized in deep learning], have been around for a while. But what hasn’t been is GPU’s [graphical processing units], and lots of data.”
According to Michalski, deep learning requires processing vast amounts of data in order to allow the technology to “learn.” NVIDIA’s hardware is powerful enough to handle the amount of data this process demands, and Mass General’s data library offers the tremendous volume of information deep learning requires.
Neural networks are inspired by how the human brain uses neurons and synapses to learn and complete complex tasks. Says Michalski, “A lot of simple constructs or simple modules, when you put them together, they can do some not so simple things like learning and changing behavior over time, the same way your neurons do in your brain.”
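Michalski’s point about simple modules combining into something more capable can be illustrated with a toy sketch (this is an illustration only, not code from Mass General or NVIDIA). Each “neuron” below just takes a weighted sum and applies a threshold, yet wiring three of them together computes XOR, a function no single such unit can compute on its own:

```python
def neuron(inputs, weights, bias):
    """One simple unit: a weighted sum of inputs, then a 0/1 threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def xor_network(x1, x2):
    """Two hidden units feeding one output unit, with hand-set weights."""
    h_or = neuron([x1, x2], [1, 1], -0.5)    # fires if either input is on
    h_and = neuron([x1, x2], [1, 1], -1.5)   # fires only if both are on
    return neuron([h_or, h_and], [1, -1], -0.5)  # "OR, but not AND"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```

In a real deep network the weights are not set by hand but learned from data, which is why the large datasets Michalski describes matter so much.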
At Mass General, the Center for Clinical Data Science will continue to develop these AI capabilities, with a focus on bringing these tools to clinicians and their patients. The center is beginning its work with a focus on radiology and pathology.
Michalski says that earlier work in computer vision makes radiology a logical next step: “imaging is a nice place to start…it’s an area that has very close analogs to tasks that have already happened in the computer vision world.”
“The computer vision community was doing things like classifying dogs and cats images to see which automated systems could be the best at those kinds of classification tasks. Show an image, and what does the computer ‘see,’ so to speak,” he continued.
Applying deep learning to these same types of tasks allows the computer to continue improving. Most significantly, this type of learning doesn’t require increasingly complex code to handle more complex images, which makes the technology much more scalable.
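The scalability point can be sketched in miniature (a toy nearest-neighbor classifier standing in for a deep network, with invented feature names and numbers): the classifier code never changes between tasks — only the labeled examples fed to it do.

```python
def nearest_neighbor(labeled_examples, query):
    """Return the label of the training example closest to the query."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(labeled_examples, key=lambda ex: distance(ex[0], query))
    return best[1]

# Toy "cat vs. dog" features (made up: [ear_pointiness, snout_length]).
pets = [((0.9, 0.2), "cat"), ((0.3, 0.8), "dog")]
print(nearest_neighbor(pets, (0.8, 0.3)))    # -> cat

# The exact same function on a different task: toy lung-nodule features
# (made up: [diameter_mm, density]).
scans = [((4.0, 0.1), "benign"), ((22.0, 0.7), "suspicious")]
print(nearest_neighbor(scans, (20.0, 0.6)))  # -> suspicious
```

Real deep-learning systems go a step further: they learn the features themselves from raw pixels, so harder images call for more data and compute rather than more hand-written code.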
As a trained radiologist, Michalski is eager to see this technology improve the profession. “In radiology, we’re getting many many tasks we have to do. Everything from reading chest x-rays, to looking for pulmonary nodules. Many of these tasks are sort of hunt and search…they take a lot of time. If we could automate, we would.”
Says Michalski, “When I think about this, I think about the opportunity to make radiologists do a job that allows them to operate at the top of their license, and the ability to spend more time on the harder cases, as opposed to work to process very quickly a big stack of images that don’t have those difficult problems that need a human to solve.”
Boston Children’s Hospital also recognizes the need to improve the tools available to radiologists. Earlier last year, they partnered with GE Healthcare to collaborate on developing digital tools that will improve physicians’ ability to read radiology scans.
The focus of their technology, initially, is pediatric neurological scans. According to Sanjay Prabhu, MBBS, pediatric neuroradiologist at Boston Children’s, “Pediatric brain scans of children under the age of four can be particularly tricky to read because the brain is rapidly developing during this period of childhood.” Due to the scarcity of pediatric neuroradiologists, the aim of the technology is to allow physicians of different expertise levels to read these scans.
This decision support platform will be pre-loaded with normative reference scans of children of different ages, for doctors to view alongside the scan of the pediatric patient. These normative scans will provide a benchmark against which to compare the patient’s pathology. This digital tool will be the first of many, and the aim is to develop hundreds of similar apps by 2020.
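One simple way such a benchmark could work, sketched here with invented numbers (this is not the GE Healthcare / Boston Children’s implementation), is to express a patient measurement as a z-score against age-matched normative values:

```python
# Hypothetical normative table: age in months -> (mean, standard deviation)
# for some brain measurement, e.g. a structure's volume in mL.
NORMS = {12: (10.0, 1.5), 24: (11.0, 1.6), 36: (12.0, 1.8)}

def z_score(age_months, value):
    """How many standard deviations the patient is from the age norm."""
    mean, std = NORMS[age_months]
    return (value - mean) / std

print(round(z_score(24, 14.2), 2))  # -> 2.0 (well above the age norm)
```

The appeal of a normative benchmark is exactly what Prabhu describes: what counts as “normal” shifts rapidly in early childhood, so the comparison must be age-specific.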
Sarah Schroeder is a Masters of Public Health candidate at the University of Texas School of Public Health in Austin, TX. She is studying health promotion and behavioral sciences, with a concentration in health disparities. Sarah recently completed her practicum with the Texas Tribune, where she helped curate and develop stories for an online health newsletter. She is interested in using journalism and storytelling to highlight important health issues and empower readers to create a healthier world for themselves and others. As an editorial intern with MedTech Boston, she looks forward to learning more about medical technology, while further developing her skills as a health journalist. When not reading, writing, and learning about all things health-related, Sarah enjoys cooking, practicing yoga, and sewing garments.