
Weekly Update 9

This week I worked on finalizing the regression model, testing different combinations of bandpass filters, model architectures, and noise. I trained on the fixed apoferritin dataset. This graph shows one of the best models trained so far:
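
For context, here is a minimal FFT-based bandpass filter of the kind used to produce the filtered inputs; this is my own sketch, and the cutoff fractions and implementation details are assumptions rather than the exact filters we used:

```python
import numpy as np

def bandpass(image, low_frac, high_frac):
    """Keep spatial frequencies with normalized radius in [low_frac, high_frac].

    Radii are expressed as a fraction of Nyquist (1.0 = Nyquist).
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # 0 at DC, ~1 at Nyquist
    mask = (r >= low_frac) & (r <= high_frac)
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real
```

Cutoff pairs like (0.0, 0.1), (0.1, 0.3), and (0.3, 1.0) would give the kind of low/mid/high inputs the filtered-input models take, though the actual cutoffs differ.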

Since the labels are the sum of 4 quality metrics, I also compared the predictions against each individual label:

I also ran a modified SRGAN to get some evaluation results and look at the feasibility of using it for denoising:

Ground truth and generated image

Next week, I plan on incorporating the power spectrum into the regression model, evaluating the SRGAN, and summarizing our results.


7/23 Analysis

Started training the SRGAN on the t20s dataset with correct projections. Also added an additional loss component: the MSE between the feature maps a pretrained VGG model produces for the generated image and the feature maps it produces for the upsampled low-pass of the original image patch. This was to ensure that the generated image preserves some of the features and details of the original image.
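
A minimal sketch of this feature-map loss, assuming PyTorch/torchvision; the VGG variant, truncation layer, and channel handling are my assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class VGGFeatureLoss(nn.Module):
    """MSE between VGG feature maps of the generated patch and a reference patch."""

    def __init__(self, cutoff=18):  # truncation layer is an assumption
        super().__init__()
        vgg = vgg19(pretrained=True).features[:cutoff].eval()
        for p in vgg.parameters():
            p.requires_grad = False  # keep the VGG weights frozen
        self.features = vgg
        self.mse = nn.MSELoss()

    def forward(self, generated, reference):
        # reference = upsampled low-pass of the original patch;
        # grayscale patches are repeated to the 3 channels VGG expects
        g = generated.repeat(1, 3, 1, 1) if generated.shape[1] == 1 else generated
        r = reference.repeat(1, 3, 1, 1) if reference.shape[1] == 1 else reference
        return self.mse(self.features(g), self.features(r))
```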

Also looked at how each individual label correlated with the predictions:
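
The comparison boils down to a correlation between the predictions and each component score; a sketch assuming both are available as NumPy arrays (names are mine):

```python
import numpy as np

def per_label_correlation(preds, metrics):
    """preds: (N,) predicted scores; metrics: (N, 4), one column per quality metric.
    Returns the Pearson correlation of the predictions with each metric."""
    return [np.corrcoef(preds, metrics[:, i])[0, 1] for i in range(metrics.shape[1])]
```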


7/22 Data augmentation

Added Gaussian noise to the input images as a form of data augmentation, in hopes of reducing overfitting. The problem is that adding noise would theoretically reduce the quality score, but I gave it a shot anyway (a sketch of the augmentation follows the results below):

  • Gaussian noise with 0.05 std (images are normalized to the range 0–1): eval loss increased slightly
  • Gaussian noise with 0.02 std: eval loss stayed roughly the same
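
A minimal sketch of the augmentation, assuming PyTorch tensors already normalized to [0, 1] (the clamp and function name are my additions):

```python
import torch

def add_gaussian_noise(images, std=0.05):
    """Additive Gaussian noise; clamping keeps values in the normalized 0-1 range."""
    return (images + torch.randn_like(images) * std).clamp(0.0, 1.0)
```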

7/21 Alignment

Created a class to load a model and its weights for prediction. Gave the model/weights trained on single images of the apoferritin dataset to Xiaochen for his alignment model. However, the aligned reconstruction was worse than the original. There may have been a problem with the apoferritin dataset, so I retrained the model on the t20s dataset. However, that model performs much worse:
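
The loader class itself, as a minimal sketch assuming PyTorch and weights saved as a state_dict (class name and interface are hypothetical):

```python
import torch

class QualityPredictor:
    """Loads a trained regression model plus weights and runs inference."""

    def __init__(self, model, weights_path, device="cpu"):
        self.device = torch.device(device)
        self.model = model.to(self.device)
        self.model.load_state_dict(torch.load(weights_path, map_location=self.device))
        self.model.eval()

    @torch.no_grad()
    def predict(self, batch):
        return self.model(batch.to(self.device)).cpu()
```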


7/20 Regression work

Tested various regression models:

  • 3 filtered input “residual network”
  • Single image network
    • lowpass image
    • original image
  • Projection model
    • lowpass image + projection
    • original image + projection
  • Combined
    • 3 filtered images + projection

Lowpass image + projection worked the best with a loss of 0.05; all the other models worked well too, with losses ranging from 0.06 to 0.1.
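
For reference, a rough sketch of how the best-performing variant's inputs could be combined, stacking the lowpass image and the projection as channels (layer sizes and names are illustrative, not the actual model):

```python
import torch
import torch.nn as nn

class LowpassProjectionNet(nn.Module):
    """Regression model taking a lowpass patch plus its projection as a 2-channel input."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # single quality-score output
        )

    def forward(self, lowpass, projection):
        # stack the two single-channel inputs along the channel dimension
        return self.backbone(torch.cat([lowpass, projection], dim=1))
```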

 


Weekly Update 9

This week, I finished training an SRGAN model. The following images are the original low-res image patches, the high-res projection patches, and the generated high-res patches:

While the generated images look good enough at a glance, upon closer inspection they lack small details and are too regular.

I also retrained the regression model on a new apoferritin dataset, downsampled from 512×512 to 128×128. This was because we wanted to see if we could surpass the Nyquist limit through multi-frame super-resolution; downsampling ensured that our bottleneck would be the Nyquist limit rather than other factors such as noise and reconstruction errors. Additionally, the model was trained on the sum of 4 normalized quality metrics produced by different software, to provide a more reliable quality metric.
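
A sketch of how such a combined label can be built, assuming each metric is min-max normalized across the dataset before summing (function and variable names are mine):

```python
import numpy as np

def combined_label(metrics):
    """metrics: (N, 4) array, one column per quality metric from a different package.
    Min-max normalize each metric over the dataset, then sum per image."""
    mins, maxs = metrics.min(axis=0), metrics.max(axis=0)
    normalized = (metrics - mins) / (maxs - mins)
    return normalized.sum(axis=1)  # combined label in [0, 4]
```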

The following graph shows the predictions and labels from the “residual” model, which performs much better than before.

 


7/16 New Model

Created a new network built mainly from residual blocks with an attention mechanism, with higher-frequency data fed into the network at progressively later layers. It takes in 3 images bandpass filtered at different frequencies, and normalizes then sums the quality scores to produce the label.
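
A rough sketch of the frequency-injection idea, assuming three single-channel bandpass inputs; the attention blocks are omitted for brevity, and every layer size and name here is illustrative rather than the actual architecture:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.conv(x))

class MultiBandNet(nn.Module):
    """Feeds low-, mid-, and high-frequency bandpass images at successive stages."""

    def __init__(self, width=32):
        super().__init__()
        self.stem = nn.Conv2d(1, width, 3, padding=1)
        self.stage1 = ResidualBlock(width)
        self.inject2 = nn.Conv2d(1, width, 3, padding=1)  # mid-frequency input
        self.stage2 = ResidualBlock(width)
        self.inject3 = nn.Conv2d(1, width, 3, padding=1)  # high-frequency input
        self.stage3 = ResidualBlock(width)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 1))

    def forward(self, low, mid, high):
        x = self.stage1(self.stem(low))
        x = self.stage2(x + self.inject2(mid))   # mid band enters at stage 2
        x = self.stage3(x + self.inject3(high))  # high band enters at stage 3
        return self.head(x)
```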

I plan to incorporate projection images into the model once Xiaochen makes the projection dataset.


7/14 Results

Used Xiaochen’s new filters on the apoferritin dataset, plus dilation for the kernels; this improved the loss on the “residual network” to 1.1086 from 1.1742.

It did not work as well with the projection model: 1.0037 compared to 0.8045.

Will start work on a new CNN architecture incorporating multiple scores and an attention mechanism.