TR&D3: Image processing and analysis, with an emphasis on analysis of cell and tissue organization in support of modeling
Microscope images provide information about biological systems that is typically unavailable from any other source. Over the past thirty years, the development of new microscopy methods, the advent of digital recording, and the introduction of high-throughput microscopes have made it routine to collect terabytes of images containing detailed molecular and structural information. Breaking this logjam in acquiring images, however, has created a bottleneck in analyzing them.
In TR&D3, we are addressing three specific, critical challenges in image processing and analysis: methods to convert diverse, micron-resolution images into forms usable for modeling cellular processes at molecular resolution; paths to reconstructing the three-dimensional connectivity of neuronal tissue and relating that structure to neuronal activity; and tools to provide fast access to various types of processed images whose sizes strain current methods, especially for interactive visualization. Overcoming these challenges will enable a dramatic increase in the information that can be extracted from images and in the routine use of cutting-edge modeling tools by cell biologists, developmental biologists, and neurobiologists.
To meet these challenges, we are focusing on three areas:
- The development and distribution of software to build models of cellular and subcellular organization from fluorescent microscope images
This software will include interfaces that provide new capabilities to cell simulation packages such as MCell, Virtual Cell, and Smoldyn. It will improve existing object-based models of protein distribution at a single time point, and will include tools to estimate generative spatial models of single-protein concentrations at a single time point, generative spatiotemporal models for single proteins, and dependencies between the spatiotemporal models of different proteins.
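The basic idea of a generative spatial model — drawing synthetic protein positions from a fitted probability distribution, which can then be passed to a simulator — can be illustrated with a toy sketch. The Gamma radial distribution and all parameter values below are illustrative placeholders, not values fitted by CellOrganizer:

```python
import numpy as np

def sample_protein_positions(n, radial_shape=2.0, radial_scale=3.0, seed=0):
    """Draw synthetic 2D protein positions from a toy generative spatial
    model: radial distance from the cell center follows a Gamma distribution
    and the angle is uniform (isotropic). Parameters are placeholders."""
    rng = np.random.default_rng(seed)
    r = rng.gamma(radial_shape, radial_scale, size=n)   # radial density model
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)       # isotropic angles
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

# generate 500 synthetic positions, e.g. as seed locations for a simulation
pts = sample_protein_positions(500)
print(pts.shape)  # (500, 2)
```

A fitted model of this general form lets a simulator be run on many statistically representative synthetic cells rather than only on the specific cells that were imaged.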
- The creation of high-efficiency registration and analysis algorithms for petavoxel image sets, especially those from serial-section electron microscopy (ssEM).
These algorithms will be implemented across a wide range of computing platforms and will improve the detection of alignment points, leading toward the fully automatic assembly of datasets exceeding one petavoxel. They will also include semi-automated methods to align datasets from different sources (such as EM and optical microscopy), and to trace and segment neural pathways and other structures within registered ssEM datasets.
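The alignment-point detection at the core of such registration can be illustrated with a standard phase-correlation computation, in which the cross-power spectrum is whitened so that only phase (i.e., shift) information remains. This NumPy sketch is a minimal illustration of the general technique, not the AlignTK implementation:

```python
import numpy as np

def phase_correlate(a, b, eps=1e-9):
    """Estimate the integer translation between two same-sized 2D tiles
    by whitening the cross-power spectrum (phase correlation)."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + eps            # whitening: keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real         # sharp peak at the relative shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak coordinates to signed shifts (wrap-around convention)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# toy example: shift a random tile and recover the known offset
rng = np.random.default_rng(0)
tile = rng.random((128, 128))
shifted = np.roll(tile, shift=(5, -3), axis=(0, 1))
print(phase_correlate(shifted, tile))  # (5, -3)
```

Because whitening discards local intensity variations, this family of methods is comparatively robust to the staining and contrast differences common between adjacent serial sections.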
- The development of a new multi-platform framework called the Virtual Volume Filesystem (VVFS), enabling the efficient delivery of images from large datasets (~100 gigabytes or larger), especially the volumetric data produced by DBP5.
The VVFS will allow users to enter data in optimized VVFS formats and insert algorithms into the VVFS pipeline to create customized, on-the-fly transformations. Results will be delivered as virtual files to analysis programs on users’ computing platforms. As an example application that uses the system, a Virtual Volume Viewer will be implemented to provide interactive viewing of VVFS datasets while navigating in arbitrary 3D orientations.
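The core idea — applying a user-supplied transformation on the fly, per requested region, so the full transformed volume is never materialized — can be sketched as follows. The class and method names here are illustrative assumptions, not the VVFS API:

```python
import numpy as np

class VirtualVolume:
    """Minimal sketch of on-the-fly virtual delivery: a user-supplied
    transform is applied per region read, so only the requested block is
    ever computed or held in memory. Names are illustrative, not VVFS."""

    def __init__(self, store, transform=None):
        self.store = store          # any array-like backing store (e.g. memmap)
        self.transform = transform  # callable applied to each region read

    def read_region(self, z0, z1, y0, y1, x0, x1):
        block = np.asarray(self.store[z0:z1, y0:y1, x0:x1])
        return self.transform(block) if self.transform else block

# usage: serve contrast-inverted views of a uint8 volume without copying it
raw = np.arange(4 * 64 * 64, dtype=np.uint8).reshape(4, 64, 64)
vv = VirtualVolume(raw, transform=lambda b: 255 - b)
region = vv.read_region(0, 1, 0, 4, 0, 4)
print(region.shape)  # (1, 4, 4)
```

In the real system the backing store would be a large on-disk dataset and the results would be exposed as virtual files, but the lazy, region-at-a-time evaluation is the property that makes interactive navigation of ~100 GB volumes feasible.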
Highlights from Year One of P41 support
- Major new release of CellOrganizer (v2.0)
- AlignTK 1.0.0 released
- Successful modeling and comparison of spatiotemporal patterns of protein distribution during T cell synapse formation (with DBP4). Manuscript in preparation.
- New collaborative project begun on modeling of neuronal differentiation. Extensive image collection created for PC12 cells during NGF-induced differentiation, and a generative, statistical model of changes in cell and nuclear shape and mitochondrial distribution created. Manuscript in preparation.
- More efficient diffeomorphic shape model learning software developed; being incorporated into the next release of CellOrganizer.
- Promising results demonstrated for a signal-whitening approach to registration. Robustness and correctness testing performed on a cutting-edge 10,000-section, 100 TB dataset; performance testing and optimization are underway during further development.