Denoising acquired images using deep learning

Copy this notebook into a directory in your Google drive.

In this tutorial we will create a deep learning denoising model trained on data acquired by Pycro-Manager on your microscope. We will then use this model to denoise images collected by Pycro-Manager in real time.

We will run image acquisition and inference locally and train on a Google Colab GPU instance, though if you have a fairly powerful local GPU, feel free to train locally as well.

The deep learning model used in this tutorial is Noise2Void (N2V), which allows us to train a denoising model on noisy images alone, without clean ground-truth targets. Check out how it works here and here.

Please install Pycro-Manager locally before running this Colab notebook.

Written by Ryan Mei, Henry Pinkard


Part 1: Connect to a local runtime

Open this notebook on your local computer. If you have not yet done so, install Pycro-Manager:

[2]:
!pip install pycromanager

Open Micro-Manager and connect your microscope to your computer.


Part 2: Collecting Training Images

First verify you have a working installation of Pycro-Manager. Open Micro-Manager, select Tools-Options, and check the box that says Run server on port 4827 (you only need to do this once). Then run:

[2]:
import matplotlib.pyplot as plt
import numpy as np

from pycromanager import Core

core = Core()
print(core)
<pycromanager.zmq_bridge._bridge.mmcorej_CMMCore object at 0x108dbfd30>

The output should look something like:

Out[1]: JavaObjectShadow for : mmcorej.CMMCore
It is important that the images we use to train the denoising model were acquired with the same camera and imaging settings (gain, EM-gain, read-out parameters, …) as in your experiments.

We recommend that you acquire 3-10 images. If your camera is higher resolution, or if you are running this notebook without a GPU, training can sometimes take more than 12 hours. In this tutorial we will capture images of a single scene, though you may get improved performance by capturing different samples and fields of view.

Acquisition

Adjust your microscope to the imaging settings (gain, read-out parameters, …) you plan to use in your experiments, and stage your sample. We will now collect the images and store them in a numpy array.

Let’s first try snapping a single image using the cell below. Make any adjustments needed.
[4]:
## Optional: Set microscope properties here. Here we set a property of
## the core itself, but same code works for device properties
# auto_shutter = core.get_property('Core', 'AutoShutter')
# core.set_property('Core', 'AutoShutter', 0)

core.snap_image()
tagged_image = core.get_tagged_image()
pixels = np.reshape(
    tagged_image.pix, newshape=[tagged_image.tags["Height"], tagged_image.tags["Width"]]
)
plt.imshow(pixels, cmap="magma")
[5]:
quantity = 6  # Adjust to the number of images you would like to collect
dataRaw = []


def snap_and_get_image():
    # Snap a single image and reshape the pixel buffer into a 2D array
    core.snap_image()
    tagged_image = core.get_tagged_image()
    pixels = np.reshape(
        tagged_image.pix,
        newshape=[tagged_image.tags["Height"], tagged_image.tags["Width"]],
    )
    dataRaw.append(pixels)


for _ in range(quantity):
    snap_and_get_image()

dataRaw = np.array(dataRaw)

Let’s save our data to a .npy file.

[6]:
np.save("dataRaw.npy", dataRaw)
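As an optional sanity check, reload the file and confirm the stack has the shape you expect before moving on:

[ ]:
check = np.load("dataRaw.npy")
print(check.shape)  # Expected: (quantity, Height, Width)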

Part 3: Creating the Model

If you have an NVIDIA GPU and would like to train the model locally, feel free to skip to the next block of code. Otherwise, we will connect to a Colab runtime to utilize a free GPU instance.

First disconnect from the local runtime using the dropdown in the top right, and switch the runtime back to ‘hosted’.

We also want to enable GPU acceleration to speed up training: under the ‘Runtime’ dropdown in the top menu bar, select ‘Change runtime type’ and set the hardware accelerator to ‘GPU’.

[7]:
# CSBDeep is built on TensorFlow v1 and will NOT work with v2.
%tensorflow_version 1.x
# Check that we are connected to a GPU
!nvidia-smi
import tensorflow as tf

print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices("GPU")))
[8]:
# Install Noise2Void
%pip install n2v
import os

import numpy as np
from matplotlib import pyplot as plt  # Libraries for plotting
from matplotlib.image import imread, imsave  # For processing images
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator
from n2v.models import N2V, N2VConfig
from PIL import Image

Upload dataRaw.npy from the directory where you started your local runtime to this Colab notebook’s directory using the ‘file’ button in the left side-menu.

[9]:
imgs = np.load("dataRaw.npy")
testImg = imgs[0]  # Hold out the first image to test the trained model on
for im in range(1, len(imgs)):
    Image.fromarray(imgs[im]).save(str(im) + ".tif")
datagen = N2V_DataGenerator()
imgs = datagen.load_imgs_from_directory("/content", filter="*.tif", dims="YX")
# If on a local runtime use:
# imgs = np.load('dataRaw.npy')

Check that we can view our images.

[10]:
plt.figure(figsize=(14, 7))
plt.subplot(1, 2, 1)
plt.imshow(imgs[0][0, ..., 0], cmap="magma")
plt.subplot(1, 2, 2)
plt.imshow(imgs[-1][0, ..., 0], cmap="magma")

We will now create training patches and validation patches from the images we collected. Feel free to change the shape, as N2V can train on arbitrarily large patches.

[11]:
patches = datagen.generate_patches_from_list(imgs, shape=(64, 64), shuffle=True)

divide = int(len(patches) / 8)
train_patches = patches[divide:]
val_patches = patches[:divide]
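Optionally, print the patch array shapes to confirm the split; the patch arrays have shape (num_patches, height, width, channels):

[ ]:
print("Training patches:", train_patches.shape)
print("Validation patches:", val_patches.shape)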

Let’s look at one of our training and validation patches.

[12]:
plt.figure(figsize=(14, 7))
plt.subplot(1, 2, 1)
plt.imshow(train_patches[0, ..., 0], cmap="magma")
plt.title("Training Patch")
plt.subplot(1, 2, 2)
plt.imshow(val_patches[0, ..., 0], cmap="magma")
plt.title("Validation Patch");

Let’s configure our model. We very strongly recommend not training for more than 120 epochs on Colab, as hosted sessions time out after 12 hours. Also make sure not to close your browser, or this notebook’s data will be erased after 90 minutes. For detailed documentation of the parameters and what they mean, check this out.

[13]:
config = N2VConfig(
    train_patches,
    unet_n_depth=3,
    unet_kern_size=3,
    train_steps_per_epoch=300,
    train_epochs=80,
    train_learning_rate=0.0005,
    train_loss="mse",
    batch_norm=True,
    train_batch_size=128,
    n2v_perc_pix=0.198,
    n2v_patch_shape=(64, 64),
    unet_n_first=96,
    unet_residual=True,
    n2v_manipulator="uniform_withCP",
)
vars(config)

Mount Google Drive. We will save our model to a folder in Drive so we do not lose it when we close this Colab notebook.

[14]:
from google.colab import drive

drive.mount("/content/gdrive")
[15]:
model_name = "n2v_fluorescence_microscopy"
model_dir = "/content/gdrive/My Drive/denoising_model"
# We are now creating our network model.
model = N2V(config=config, name=model_name, basedir=model_dir)

Time to train our model. Make sure not to close this notebook during training if you are using a hosted runtime. This may take a while: roughly 11 hours for 100 epochs with the provided settings.

[16]:
history = model.train(train_patches, val_patches)
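The returned history is a standard Keras History object, so we can sketch the loss curves to check for convergence (this assumes the usual ‘loss’ and ‘val_loss’ keys):

[ ]:
plt.figure(figsize=(10, 5))
plt.plot(history.history["loss"], label="Training loss")
plt.plot(history.history["val_loss"], label="Validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend();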

Let’s test our fresh model on an image we collected earlier.

[17]:
pred = model.predict(testImg, axes="YX")
plt.figure(figsize=(14, 7))
plt.subplot(1, 2, 1)
plt.imshow(testImg, cmap="magma")
plt.title("Raw Image")
plt.subplot(1, 2, 2)
plt.imshow(pred, cmap="magma")
plt.title("Denoised");

Part 4: Testing our Model

Now let’s use our model to denoise images collected in real time by Pycro-Manager!

First, start and reconnect to a local runtime. Download the folder denoising_model from your Google Drive to the current working directory of your local runtime.

Let’s load our model.

[18]:
import matplotlib.pyplot as plt
import numpy as np
from n2v.models import N2V, N2VConfig

from pycromanager import Acquisition, multi_d_acquisition_events

model_name = "n2v_fluorescence_microscopy"
basedir = "./"
# We are now creating our network model.
model = N2V(config=None, name=model_name, basedir=basedir)

Create a Pycro-Manager image processor that applies the deep learning model we trained to each acquired image.

[19]:
def img_process_fn(image, metadata):
    # Apply our denoising model to the acquired image and return the result
    image = model.predict(image, axes="YX").astype(np.uint16)
    return image, metadata
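Before starting a live acquisition, you can optionally dry-run the processor on one of the raw frames saved earlier. This is a sketch that assumes dataRaw.npy is in the current working directory and that an empty metadata dict is acceptable for an offline test:

[ ]:
raw = np.load("dataRaw.npy")[0]
denoised, _ = img_process_fn(raw, {})

plt.figure(figsize=(14, 7))
plt.subplot(1, 2, 1)
plt.imshow(raw, cmap="magma")
plt.title("Raw")
plt.subplot(1, 2, 2)
plt.imshow(denoised, cmap="magma")
plt.title("Denoised");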

Let’s acquire some images!

[20]:
directory_to_save_images = "/acquisitions_tmp"
with Acquisition(
    directory=directory_to_save_images,
    name="acquisition_1",
    image_process_fn=img_process_fn,
) as acq:
    events = multi_d_acquisition_events(num_time_points=10)
    acq.acquire(events)

To learn how to read your denoised images, check this out!
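As a minimal sketch of what that looks like, the saved (already denoised) images can be read back with Pycro-Manager’s Dataset reader. The exact on-disk folder name below is an assumption: Micro-Manager appends a numeric suffix (e.g. acquisition_1_1) that depends on how many acquisitions already exist in the directory.

[ ]:
from pycromanager import Dataset

# Open the acquisition written above (folder name suffix is an assumption)
dataset = Dataset(directory_to_save_images + "/acquisition_1_1")
denoised_img = dataset.read_image(time=0)
plt.imshow(denoised_img, cmap="magma")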