Leveraging AI to Enhance Super-Resolution Confocal Microscopy
How a team of researchers successfully used artificial intelligence and machine learning to improve their confocal imaging
Yicong Wu, PhD, staff scientist at the National Institute of Biomedical Imaging and Bioengineering (NIBIB), discussed his team's recent landmark study on multiview confocal super-resolution microscopy.
As Lab Manager recently reported, Wu and colleagues successfully integrated multiple analytical approaches to increase resolution, spatial scale, duration, and depth of penetration of confocal imaging in a range of sample types. The group went on to apply artificial intelligence (AI) and machine learning (ML) strategies to better predict and resolve challenging images. The combined approach offers significant steps forward for the range, scale, and resolution of biological confocal imaging applications.
Q: Although confocal microscopy is a powerful, widely used tool thanks to its contrast and flexibility, there are several significant limitations and areas for improvement. Can you elaborate?
A: Yes, confocal microscopy remains the dominant workhorse in biomedical optical microscopy for imaging a wide variety of three-dimensional samples, but it has clear limitations. These drawbacks include substantial point spread function anisotropy (axial resolution is usually two- to three-fold worse than lateral resolution, confounding 3D spatial analysis of fine subcellular structures); spatial resolution capped at the diffraction limit; depth-dependent degradation in scattering samples, leading to signal loss far from the coverslip; and three-dimensional illumination with volumetric bleaching, which can rapidly deplete the pool of available fluorescent molecules and cause unwanted phototoxicity.
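To put that anisotropy in numbers, the textbook diffraction-limit estimates below (illustrative values, not figures from the study) show why axial resolution lags lateral resolution:

```python
import math

# Illustrative textbook estimates of confocal PSF size (FWHM), not
# figures from the study. Parameters are typical for GFP imaging
# with a water-immersion objective.
wavelength = 520.0   # emission wavelength, nm
NA = 1.2             # numerical aperture
n = 1.33             # refractive index of water

lateral_fwhm = 0.51 * wavelength / NA
axial_fwhm = 0.88 * wavelength / (n - math.sqrt(n**2 - NA**2))

print(f"lateral ≈ {lateral_fwhm:.0f} nm")                # ~221 nm
print(f"axial   ≈ {axial_fwhm:.0f} nm")                  # ~605 nm
print(f"anisotropy ≈ {axial_fwhm / lateral_fwhm:.1f}x")  # ~2.7x
```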
Q: Can you summarize the major achievements of the study?
A: We improved the spatial resolution, imaging duration, and depth penetration of confocal microscopy when imaging single cells, living worm embryos and adults, fly wings, and mouse tissues by combining innovative hardware (multiview microscopy and an efficient line-scanning confocal module) with state-of-the-art software (super-resolution reconstruction, joint deconvolution, and deep learning techniques).
Q: What approaches were used to accomplish these advancements?
A: We achieved our improvements in performance via an integrated approach:
1) We developed a compact line-scanning illuminator that enables sensitive, rapid, diffraction-limited confocal imaging over a 175 × 175 μm² area and can be readily incorporated into multiview imaging systems.
2) We developed reconstruction algorithms that fuse three line-scanning confocal views of the sample, enabling a roughly two-fold improvement in axial resolution relative to conventional confocal microscopy and recovering signal otherwise lost to scattering (a minimal sketch of the fusion idea appears after this list).
3) We used deep learning algorithms to lower the illumination dose imparted by confocal microscopy, enabling clearer imaging than light sheet fluorescence microscopy in living, light-sensitive, and scattering samples.
4) We used sharp line illumination introduced from three directions to further improve spatial resolution along those directions, enabling better than ten-fold volumetric resolution enhancement relative to traditional confocal microscopy.
5) We showed that combining deep learning with traditional multiview fusion approaches can produce super-resolution data from single confocal images, providing a route to rapid, optically sectioned, super-resolution imaging with higher sensitivity and speed than otherwise possible.
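To make the fusion step in point 2 concrete, here is a minimal sketch of joint multiview Richardson-Lucy deconvolution, assuming three already-registered volumes and known per-view PSFs. The study's actual pipeline also handles registration and scattering recovery, so treat this only as the core idea:

```python
import numpy as np
from scipy.signal import fftconvolve

def multiview_rl(views, psfs, n_iter=20):
    """Joint Richardson-Lucy deconvolution of co-registered 3D views.

    A minimal sketch of the multiview-fusion idea, not the authors'
    exact algorithm: each iteration cycles through the views and
    applies the standard RL multiplicative update, so the estimate
    is constrained by every viewing direction at once.
    """
    est = np.mean(views, axis=0)   # start from the average of the views
    eps = 1e-12                    # guard against division by zero
    for _ in range(n_iter):
        for view, psf in zip(views, psfs):
            blurred = fftconvolve(est, psf, mode="same")
            ratio = view / (blurred + eps)
            # correlate with the PSF (convolve with its mirror image)
            est = est * fftconvolve(ratio, psf[::-1, ::-1, ::-1], mode="same")
    return est
```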
Q: Can you discuss the novel triple-view SIM (structured illumination microscopy) image reconstruction technique used in the study, and how it compares with traditional SIM in the context of super-resolution microscopy?
A: In traditional SIM, the two-fold resolution enhancement is achieved by reconstructing multiple interference images. In our triple-view SIM, we obtained 1D super-resolution images using digital photon reassignment (the core idea is sketched below) and joint deconvolution of the views, achieving triple-view 1D SIM. We also used deep learning to predict 1D super-resolved images at six rotations per view, then jointly deconvolved those views to achieve triple-view 2D SIM. We demonstrated that our triple-view 1D and 2D SIM methods outperformed a commercial 3D SIM system when imaging relatively thick samples.
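For readers unfamiliar with photon reassignment: in a line-scanning geometry, a photon detected at a distance d from the illumination line most likely originated roughly halfway between the two, so shifting each detector line by d/2 before summing sharpens the image along that axis. A minimal illustrative sketch, assuming a hypothetical array layout rather than the study's implementation:

```python
import numpy as np
from scipy.ndimage import shift

def reassign_line_scan(frames, offsets_px, axis=0):
    """1D photon reassignment for line-scan confocal data.

    frames:     list of co-registered images, one per detector-line
                offset from the illumination line (hypothetical data
                layout, for illustration only)
    offsets_px: offset of each frame from the illumination line, pixels

    Each frame is shifted by half its offset toward the line and
    summed, narrowing the effective PSF along `axis`.
    """
    out = np.zeros_like(frames[0], dtype=float)
    for frame, d in zip(frames, offsets_px):
        vec = [0.0] * frame.ndim
        vec[axis] = -d / 2.0   # reassign photons by half the offset
        out += shift(frame.astype(float), vec, order=1)
    return out
```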
Q: The multifaceted approach used in this study produced significant enhancements in confocal imaging resolution and performance. Can you describe the imaging improvements as applied to a few of the more than 20 distinct fixed and live samples that were included in the study?
A: The biological results are not only visually striking but also enable new quantitative assessments of intracellular structures and tissues. For example, we imaged Jurkat T cells expressing H2B-GFP and 3xEMTB-mCherry every five seconds for 200 time points, revealing nucleus squeezing and deformation as the cells spread on the activating surface. For thicker samples, one example was the densely labeled nerve ring region in a C. elegans larva, in which triple-view 2D SIM mode delivered a superior volumetric resolution of 253 × 253 × 322 nm³, more than ten-fold better than the raw confocal data (601 × 561 × 836 nm³).
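As a quick check of the volumetric figure, taking volumetric resolution as the product of the three quoted values:

```python
raw = 601 * 561 * 836   # raw confocal resolution volume, nm^3
sim = 253 * 253 * 322   # triple-view 2D SIM resolution volume, nm^3
print(raw / sim)        # ≈ 13.7, i.e. "more than ten-fold"
```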
Q: What were the impacts of AI and ML on the imaging improvements observed in the study?
A: Our work provides a blueprint for integrating deep learning with fluorescence microscopy. We successfully deployed neural networks to denoise the raw confocal images, enabling a lower illumination dose and thus extending imaging duration. We also showed that such networks can predict isotropic, super-resolution images and improve imaging at depth.
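The denoising step can be pictured as supervised image-to-image regression. The sketch below is a minimal, generic residual CNN trained on hypothetical paired low-dose/high-dose acquisitions; it is not the network architecture used in the study:

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """A small residual CNN that estimates a correction for a
    low-SNR frame. Generic illustration, not the study's network."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)   # residual: input + learned correction

model = Denoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(low, high):
    """One update on a hypothetical batch of paired frames, shape
    (N, 1, H, W): the same fields imaged at low and high dose."""
    optimizer.zero_grad()
    loss = loss_fn(model(low), high)
    loss.backward()
    optimizer.step()
    return loss.item()
```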
Q: How do you see AI and ML impacting future confocal microscopy investigations, and in particular single cell imaging applications?
A: We believe that combining confocal imaging with deep learning allows much better imaging performance than confocal microscopy alone, and holds great promise for improving spatial resolution, signal-to-noise ratio, imaging speed, and imaging duration. We suspect the same methods could also be profitably applied to other microscopes with sharp, line-like illumination for single-cell imaging, including lattice light-sheet microscopy, traditional and nonlinear SIM, and stimulated emission depletion microscopy with 1D depletion. Such microscopes can improve spatial resolution in single-cell studies, yet this improvement usually comes at a cost in temporal resolution, signal, or phototoxicity. One caveat: AI/ML generates predictions based on data the network has seen before. Although we obtained significant improvements when using AI, these networks can produce predictions with artifacts, particularly if the input data differ significantly from the training data the network has already "seen." More work is needed to make the output of such approaches easier to validate, but we are very excited about the possibilities for fluorescence microscopy.
Yicong Wu, PhD, is a staff scientist at the NIBIB. Under the direction of Hari Shroff, PhD, in the Section on High Resolution Optical Imaging, Wu's investigations focus on the development of new imaging tools for biological and clinical research.