Research

Our lab focuses on the technical development of machine learning algorithms and their applications in medical imaging. Specifically, our research falls into three groups: (1) development of innovative machine learning algorithms, typically for the analysis of images and often inspired by problems in medical imaging, (2) applications of deep learning to clinically relevant medical imaging problems, typically in close collaboration with physicians, and (3) investigation of the general behaviors of neural networks, particularly in the context of image analysis. Technical topics include domain adaptation, generalization in neural networks, anomaly detection, and semi-supervised learning. The applications, among others, have included breast, thyroid, and brain cancers as well as various problems in musculoskeletal imaging. We publish both in highly technical venues such as IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor [IF] = 24), IEEE Transactions on Medical Imaging (IF = 11), and the MICCAI conference, and in clinical journals such as Radiology (IF = 29), Neuro-Oncology (IF = 13), and JAMA Network Open (IF = 13). Below are some example projects pursued by our team. See the list of publications by Dr. Mazurowski on Google Scholar: https://scholar.google.com/citations?user=HlxjJPQAAAAJ&hl=en&oi=ao.

 

Investigating the impact of class imbalance on training and evaluation of machine/deep learning models

Class imbalance is a property of data in which the number of cases differs significantly between classes. It is a very common occurrence in machine learning. An example is cancer screening data, where cancer cases may constitute 1% or less of all data while normal cases constitute the remaining 99%.

In our work, we examined the impact of class imbalance on the training of traditional perceptrons as well as modern convolutional neural networks. This resulted in two of the most cited papers in the journal Neural Networks in their respective years (Mazurowski et al., Neural Networks 2008; Buda, Maki, Mazurowski, Neural Networks 2018).

Our work on the topic has been cited more than 2500 times.

The figure shows performance curves for different methods of addressing class imbalance given different numbers of minority classes (out of 10). The top row (a-c) shows results for MNIST and the bottom row (d-f) shows results for CIFAR-10.
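One of the simple remedies examined in this line of work is oversampling of the minority classes. The sketch below is a minimal NumPy illustration of random oversampling on toy data (the function and data are our own illustration, not code from the papers):

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Randomly duplicate minority-class samples until every class
    matches the majority-class count (a common baseline remedy)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        cls_idx = np.flatnonzero(y == c)
        extra = rng.choice(cls_idx, size=target - n, replace=True)
        idx.append(np.concatenate([cls_idx, extra]))
    idx = np.concatenate(idx)
    rng.shuffle(idx)
    return X[idx], y[idx]

# toy 1%/99% imbalance, as in cancer screening data
X = np.arange(1000).reshape(-1, 1)
y = np.array([1] * 10 + [0] * 990)
Xb, yb = random_oversample(X, y)  # both classes now have 990 samples
```

Our 2018 study compared such sampling-based remedies (oversampling, undersampling, two-phase training, and thresholding) across varying levels of imbalance.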

REPRESENTATIVE PUBLICATIONS:

 

Anomaly detection through deep learning-based image completion

Figure illustrating identification of abnormal locations using image completion. Figure from Swiecicki et al., Scientific Reports 11:10276 (2021). License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).

In some cancer screening settings, data containing the signs of cancer are scarce, yet there is an abundance of data from healthy (so-called “normal”) subjects. We developed an algorithm that can use data from normal subjects to detect cancer in images that contain it. We achieve this through image completion. An image completion algorithm, trained on images that do not contain cancer, learns how to fill in removed parts of an image in a way that resembles normal tissue and matches its surroundings. We hypothesize that if the completion generated in this way differs from what was originally at a given location, then that location represents abnormal tissue. We showed that the developed algorithm indeed has the capacity to distinguish normal from abnormal locations and thus could be helpful in finding cancer.
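The scanning-and-scoring idea can be sketched as follows. In our published method the completion is produced by a trained deep network; here, purely for illustration, a crude stand-in (the median of the rest of the image) plays the role of the completion model:

```python
import numpy as np

def anomaly_scores(image, patch=8):
    """For each non-overlapping patch, 'complete' it from its surround
    (here: the median of the rest of the image, a crude stand-in for a
    trained completion network) and score the location by the mean
    absolute difference between the completion and the original content."""
    H, W = image.shape
    scores = np.zeros((H // patch, W // patch))
    for i in range(H // patch):
        for j in range(W // patch):
            r, c = i * patch, j * patch
            original = image[r:r + patch, c:c + patch]
            mask = np.ones_like(image, dtype=bool)
            mask[r:r + patch, c:c + patch] = False   # remove the patch
            completion = np.median(image[mask])      # surrogate completion
            scores[i, j] = np.abs(original - completion).mean()
    return scores

# healthy-looking background with one bright "lesion" patch
img = np.ones((32, 32))
img[8:16, 8:16] = 5.0
s = anomaly_scores(img)  # the lesion patch receives the highest score
```

A location whose original content the completion model cannot reproduce is flagged as potentially abnormal, which is exactly the hypothesis tested in the paper.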

REPRESENTATIVE PUBLICATION:

 

Radiogenomics and prognosis of outcomes in breast cancer based on MRI using radiomics and deep learning

  

Segmentations of breast lesions using convolutional neural networks.

Magnetic resonance imaging shows high potential for various cancer-related problems, including prognosis of outcomes and identification of tumor genomics. We used various computer vision, traditional machine learning, and deep learning tools to take a step toward realizing this potential. We have approached various problems related to the use of MRI in breast cancer. On the radiogenomic front, we were among the first groups to discover that gene expression-based intrinsic subtype is related to enhancement dynamics in breast cancer (Mazurowski et al., Radiology 2014). We confirmed the association of imaging with genomics/pathology in multiple follow-up studies, including a radiogenomic study of 922 patients and 529 features (Saha et al., BJC 2018). Furthermore, we showed that MR imaging could be a predictor of patient outcomes, specifically distant recurrence-free survival (Mazurowski et al., 2019). Other problems addressed include prediction of Oncotype DX status (Saha et al., JCRCO 2018), response to neoadjuvant therapy (Cain et al., BCRC 2019), upstaging in DCIS (Harowicz et al., JMRI 2017), and prediction of risk of cancer in normal patients (Grimm et al., AR 2018).
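To give a flavor of what "enhancement dynamics" means computationally, the sketch below derives a few simple kinetic features from a DCE-MRI time-intensity curve. These particular features and names are illustrative only and are not the exact feature set used in our studies:

```python
import numpy as np

def kinetic_features(curve):
    """Illustrative enhancement-dynamics features from a DCE-MRI
    time-intensity curve; curve[0] is the pre-contrast intensity."""
    baseline = curve[0]
    peak_idx = int(np.argmax(curve))
    peak = curve[peak_idx]
    wash_in = (peak - baseline) / max(peak_idx, 1)               # rise per time step
    wash_out = (curve[-1] - peak) / max(len(curve) - 1 - peak_idx, 1)
    percent_enh = (peak - baseline) / baseline                   # peak enhancement
    return {"wash_in": wash_in, "wash_out": wash_out, "percent_enh": percent_enh}

# toy curve: baseline, rapid uptake, then gradual wash-out
curve = np.array([100.0, 180.0, 260.0, 240.0, 220.0])
f = kinetic_features(curve)
```

Features of this kind, computed within a segmented lesion, are the sort of imaging variables that can then be related to genomic subtypes or outcomes.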

REPRESENTATIVE PUBLICATIONS:

 

Assessment of knee osteoarthritis severity based on knee radiographs

Knee osteoarthritis is a common condition of the knee in which cartilage (the tissue separating the bones) deteriorates, resulting in pain. Precise determination of the extent of knee osteoarthritis is crucial for making treatment decisions, including whether to proceed with surgical treatment such as total knee replacement. The current human-based assessment has its challenges, including significant inter-reader variability (different raters might assign different ratings). We have developed an algorithm that is capable of determining the extent of osteoarthritis at the level of physicians. The algorithm can also measure joint space narrowing. We made the algorithm publicly available here: https://github.com/mazurowski-lab/osteoarthritis-classification.
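Once the bones are segmented, joint space width can be measured directly from the masks. The sketch below is a deliberately simplified, hypothetical version of such a measurement (per-column pixel distance between femur and tibia masks); the released algorithm is more involved:

```python
import numpy as np

def joint_space_width(femur, tibia):
    """Per-column joint space width (in pixels) between two binary bone
    masks: rows between the lowest femur pixel and the highest tibia
    pixel in each image column."""
    H, W = femur.shape
    widths = []
    for c in range(W):
        f = np.flatnonzero(femur[:, c])
        t = np.flatnonzero(tibia[:, c])
        if f.size and t.size:
            widths.append(t.min() - f.max() - 1)
    return np.array(widths)

# toy masks: femur occupies rows 0-9, tibia rows 15-19 -> gap of 5 rows
femur = np.zeros((20, 4), dtype=bool); femur[:10] = True
tibia = np.zeros((20, 4), dtype=bool); tibia[15:] = True
w = joint_space_width(femur, tibia)
```

Narrowing of this width over time (or relative to a reference population) is one of the radiographic hallmarks of osteoarthritis severity.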

The figure shows results of deep learning-based automated detection of knee joints and segmentation of the bones.

REPRESENTATIVE PUBLICATIONS:

 

Brain tumors: prognosis of outcomes and radiogenomics using computer vision and deep learning methods

The figure shows tumor segmentation generated by a human observer (reference standard) in the first column, tumor segmentation generated by our algorithm in the second column, and the original image in the third column. The two rows correspond to two different MRI slices.
This figure shows the response of a random forest classifier that was used for segmentation of the magnetic resonance sequences into 5 classes: 4 tumor classes and 1 normal class. In each of the images above, red corresponds to a high likelihood of a pixel belonging to the class represented in that image. Class 1 is normal tissue and the remaining classes are different compartments of the tumor (e.g., class 2 is enhancing tumor and class 3 is necrosis).
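The per-pixel random forest classification described above can be sketched as follows. The features here are synthetic stand-ins for multi-sequence MR intensities (not our actual data or feature extraction):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each pixel is a feature vector (one value per MR sequence); the
# classifier assigns it to one of 5 classes: 1 normal + 4 tumor
# compartments. Synthetic class-dependent intensities for illustration.
rng = np.random.default_rng(0)
n_per_class, n_seq = 200, 4
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, n_seq))
               for c in range(5)])
y = np.repeat(np.arange(1, 6), n_per_class)   # class labels 1..5

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
proba = clf.predict_proba(X)  # per-class likelihoods -> per-class maps
```

Reshaping each column of `proba` back to the image grid yields exactly the kind of per-class likelihood maps shown in the figure.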

Glioblastoma (GBM) and low-grade gliomas (LGG) are common primary brain tumors associated with significant mortality. We have developed computer vision tools for the analysis of brain MRIs. In one of our early studies (Mazurowski et al., Neuro-Oncology 2013), we demonstrated that imaging features extracted from standard MR images by radiologists improve the predictive accuracy of survival models as compared to clinical features alone. While the features extracted by radiologists are useful in survival prediction, the burden of assigning a set of features for every patient is significant. Furthermore, interobserver variability exists among radiologists in terms of the assigned features.

In response to this limitation, we have developed a set of tools for automated analysis of brain tumors, including segmentation (Buda et al., CBM 2019), shape analysis (Czarnek et al., JNO 2017), and deep learning-based classification (Buda et al., Radiology AI 2020).

REPRESENTATIVE PUBLICATIONS:

 

Harmonization of medical imaging data using deep learning

Medical images of one patient acquired using different equipment or acquisition parameters may have a very different appearance. This is a challenge when visually examining the images, performing a quantitative analysis of the images (e.g., radiomics), or training and evaluating deep learning models. It is common knowledge, corroborated by one of our recent studies (AlBadawy et al., 2018), that when an algorithm is trained using data from one institution and tested on data from another institution, performance may decrease.

We developed two methods to harmonize breast MRI data. The first method uses deep learning-based segmentation of parts of a breast MR image and then a piecewise linear pixel transformation to bring the pixel intensities to a common scale, where the same tissue types are represented by the same pixel values in different images. The method is described in (Zhang et al., 2018) and the code is available at https://github.com/MaciejMazurowski/breast-mri-normalization.
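The core of a piecewise linear intensity transformation fits in a few lines. The sketch below is a simplified illustration of the idea (the landmark choices are hypothetical; the released code differs in detail):

```python
import numpy as np

def piecewise_linear_normalize(image, landmarks, targets):
    """Map image intensities so that tissue-specific landmark values
    (e.g., medians of segmented tissue types in this image) align with
    common target values; intensities between landmarks are
    interpolated linearly."""
    return np.interp(image, landmarks, targets)

img = np.array([[0.0, 50.0], [100.0, 200.0]])
landmarks = [0.0, 100.0, 200.0]   # e.g., background / fat / fibroglandular
targets = [0.0, 0.5, 1.0]         # common scale shared by all images
norm = piecewise_linear_normalize(img, landmarks, targets)
```

Because the landmarks come from the segmentation of each image, the same tissue type ends up at the same target intensity across scanners.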

More recently, we developed a harmonization method that uses cycle-consistent generative adversarial networks. Much as such networks can render photographs as oil paintings, our method is capable of transforming images from one vendor/scanner to appear as if they were acquired using a scanner from a different vendor. We proposed some technical innovations to address limitations of CycleGANs. The method is described in (Modanwal et al., 2019).
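The key training signal in a cycle-consistent GAN is the cycle-consistency term: translating an image to the other domain and back should recover the original. A minimal sketch of that loss, with trivial stand-in "generators" in place of trained networks:

```python
import numpy as np

def cycle_consistency_loss(x_a, x_b, G_ab, G_ba):
    """L1 cycle-consistency term from CycleGAN: an image translated to
    the other domain and back should match the original. G_ab and G_ba
    stand in for trained generator networks."""
    loss_a = np.abs(G_ba(G_ab(x_a)) - x_a).mean()
    loss_b = np.abs(G_ab(G_ba(x_b)) - x_b).mean()
    return loss_a + loss_b

# toy "generators": domain B is domain A shifted by +10 intensity units
G_ab = lambda x: x + 10.0
G_ba = lambda x: x - 10.0
x_a = np.ones((8, 8))
x_b = x_a + 10.0
loss = cycle_consistency_loss(x_a, x_b, G_ab, G_ba)  # 0 for a perfect cycle
```

This term is what allows training without paired images from the two scanners, which is what makes the approach practical for harmonization.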

The figure shows MRI scans acquired using scanners from different vendors (a: GE, b: Siemens), illustrating differences in both intensity and texture.

REPRESENTATIVE PUBLICATIONS:

 

Adaptive computer-aided education in radiology

Diagram showing an overview of our adaptive educational system for radiology.

In this project, we aim to apply machine learning, computer vision, and recommender systems algorithms to improve education in radiology, with a focus on resident training. Specifically, we hypothesize that if challenging imaging cases can be identified for each trainee individually before they are seen, and those cases are presented to the trainee instead of randomly selected cases, educational outcomes will be improved. Toward this goal, we have constructed user models that use previous interpretations made by each trainee to capture their strengths and weaknesses and predict challenging cases. For this purpose, we utilize both human-assigned features of images and features extracted automatically using computer vision algorithms. These features are used by a machine learning algorithm trained on prior interpretations of a given radiologist-in-training in order to predict future cases that would be challenging for the trainee.
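The user-model idea can be sketched as a supervised ranking problem: learn from a trainee's past interpretations (case features labeled correct/incorrect) and rank unseen cases by predicted probability of error. The data and model choice below are hypothetical, purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_past = rng.normal(size=(300, 5))            # features of past cases
# synthetic labels: this trainee errs more often when feature 0 is large
y_error = (X_past[:, 0] + 0.3 * rng.normal(size=300) > 0.8).astype(int)

# fit a per-trainee user model on their prior interpretations
model = LogisticRegression().fit(X_past, y_error)

X_new = rng.normal(size=(20, 5))              # candidate teaching cases
p_err = model.predict_proba(X_new)[:, 1]      # predicted error probability
ranked = np.argsort(-p_err)                   # most challenging cases first
```

Cases at the top of this ranking would then be presented to the trainee in place of randomly selected cases.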

Our research workstation for reader studies

Within this direction of research, we have also systematically investigated (through reader studies and statistical analysis) related aspects of trainee behavior, such as the impact of perceiving a case as difficult on interpretation and error making.

The specific focus of this direction of our research has shifted from mammography in the early stages of the project toward digital breast tomosynthesis in more recent work. Identifying efficient ways of educating radiologists to interpret digital breast tomosynthesis exams is of high importance due to the rapid shift of breast cancer screening toward this relatively new modality.