Learned adaptive illumination multiphoton microscopy
In this notebook, we demonstrate how to implement Learned Adaptive Multiphoton Illumination (LAMI) microscopy using Pycro-Manager and Micro-Magellan. Throughout the tutorial, we'll use Napari to visualize data. LAMI automatically adjusts the excitation laser power in a sample-dependent way, in real time, while imaging on a 2-photon microscope, in order to compensate for the attenuation of fluorescence when imaging deep into intact tissue.
This notebook outlines all the steps for implementing this technique: setting up the microscope control software, running experiments, analyzing the data and training the machine learning model, and applying the model's predictions in real time when collecting new data. Every step is implemented with open-source software. The tutorial is entirely in Python, except for the first part, which deals with the hardware setup and may require modifying C/C++ code.
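To give a feel for the final, real-time step, here is a minimal sketch of a Pycro-Manager pre-hardware hook that writes a predicted excitation power into each acquisition event before it reaches the hardware. This is an illustrative assumption, not the tutorial's implementation: `predict_power` is a stand-in for the trained model, and the device/property names (`'Laser'`, `'Power'`) are placeholders for your microscope's configuration.

```python
import math

def predict_power(z_um):
    # Stand-in for the learned model: a simple exponential ramp with depth.
    # The real tutorial replaces this with the trained network's prediction.
    return 10.0 * math.exp(z_um / 200.0)

def power_hook(event):
    # Each Pycro-Manager acquisition event is a dict; 'z' is the focal
    # depth (microns) of this event, and 'properties' lists device
    # property changes to apply before the image is acquired.
    z = event.get('z', 0.0)
    event.setdefault('properties', []).append(
        ['Laser', 'Power', f'{predict_power(z):.2f}'])
    return event

# In a real experiment this hook would be passed to the acquisition, e.g.:
#   with Acquisition(..., pre_hardware_hook_fn=power_hook) as acq:
#       acq.acquire(events)
```

Because the hook is a plain function operating on event dicts, the power-setting logic can be tested without any attached hardware.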
The tutorial aims to be as modular as possible, so that ideas/techniques can be drawn from it and applied to other applications.
Although running through the full tutorial in sequence is how one would set up this experiment in practice, we've provided demo data at various steps along the way, both to check that things work as expected and to allow the sections to be run independently. This demo data is available here, along with the lami_helper.py file, which contains some code that will be used in various places in the notebook.
When imaging deep into a sample with 2-photon microscopy, excitation light focused to different points in the sample is subject to different amounts of scattering, so the excitation laser power must be increased with depth to maintain signal. Increasing it too little leads to a loss of detectable fluorescence; increasing it too much subjects the sample to unnecessary photobleaching and photodamage, with the potential to disrupt or alter the biological processes under investigation.
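For intuition, the depth dependence can be sketched with a simple Beer-Lambert-style model. This is an assumption for illustration only, not the learned model from the tutorial: 2-photon signal scales as the square of the ballistic power reaching the focus, which decays as exp(-z / l_s) for depth z and scattering length l_s, so holding the signal constant requires scaling the surface power by exp(z / l_s).

```python
import math

def focal_signal(p_mw, z_um, l_s_um=200.0):
    """Relative 2-photon signal for surface power p_mw at depth z_um.

    Ballistic power at the focus decays as exp(-z / l_s); 2-photon
    excitation depends quadratically on that power.
    """
    return (p_mw * math.exp(-z_um / l_s_um)) ** 2

def compensated_power(z_um, p0_mw=10.0, l_s_um=200.0):
    """Surface power that keeps the focal signal equal to its z = 0 value."""
    return p0_mw * math.exp(z_um / l_s_um)
```

Under this model, imaging at z = 400 um with l_s = 200 um requires raising the surface power by a factor of e^2 (about 7.4x) to keep the detected signal constant; the point of LAMI is to learn this adjustment from data rather than assume a fixed analytic form.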