The pipeline described here is designed for the segmentation of electron microscopy datasets that exceed gigabytes in size, in order to extract whole-cell morphologies. Once the cells are reconstructed in 3D, customized software designed around individual needs can be used to perform qualitative and quantitative analyses directly in 3D, also using virtual reality to overcome view occlusion.
Serial sectioning and subsequent high-resolution imaging of biological tissue using electron microscopy (EM) allow for the segmentation and reconstruction of high-resolution image stacks, revealing ultrastructural patterns that could not be resolved using 2D images. Indeed, the latter might lead to a misinterpretation of morphologies, as in the case of mitochondria; the use of 3D models is, therefore, increasingly common and applied to the formulation of morphology-based functional hypotheses. To date, the use of 3D models generated from light or electron image stacks makes qualitative visual assessments, as well as quantification, more convenient to perform directly in 3D. As these models are often extremely complex, it is also important to set up a virtual reality environment to overcome occlusion and to take full advantage of the 3D structure. Here, a step-by-step guide from image segmentation to reconstruction and analysis is described in detail.
The first proposed model for an electron microscopy setup allowing automated serial sectioning and imaging dates back to 19811; the adoption of such automated, improved setups for imaging large samples using EM has increased over the last ten years2,3, and works showcasing impressive dense reconstructions or full morphologies immediately followed4,5,6,7,8,9,10.
The production of large datasets came with the need for improved pipelines for image segmentation. Software tools for the manual segmentation of serial sections, such as RECONSTRUCT and TrakEM211,12, were designed for transmission electron microscopy (TEM). As the whole process can be extremely time-consuming, these tools are not appropriate when dealing with the thousands of serial micrographs that can be automatically generated with state-of-the-art, automated EM serial sectioning techniques (3DEM), such as serial block-face scanning electron microscopy (SBEM)3 or focused ion beam-scanning electron microscopy (FIB-SEM)2. For this reason, scientists have put effort into developing semi-automated and fully automated tools to improve segmentation efficiency. Fully automated tools, based on machine learning13 or state-of-the-art, untrained pixel classification algorithms14, are being improved for use by a larger community; nevertheless, segmentation is still far from fully reliable, and many works are still based on manual labor, which is inefficient in terms of segmentation time but still provides complete reliability. Semi-automated tools, such as ilastik15, represent a better compromise, as they provide an immediate readout of the segmentation that can be corrected to some extent; although ilastik does not provide a true proofreading framework, it can be complemented by TrakEM2 used in parallel16.
Large-scale segmentation is, to date, mostly limited to connectomics; therefore, computer scientists are most interested in providing frameworks for the integrated visualization of large, annotated datasets and for the analysis of connectivity patterns inferred from the presence of synaptic contacts17,18. Nevertheless, accurate 3D reconstructions can also be used for quantitative morphometric analyses, rather than merely qualitative assessments of the 3D structures. Tools like NeuroMorph19,20 and glycogen analysis10 have been developed to take measurements of lengths, surface areas, and volumes on the 3D reconstructions, and of the distribution of point clouds, completely discarding the original EM stack8,10. Astrocytes represent an interesting case study, because the lack of visual cues or repetitive structural patterns that might give investigators a hint about the function of individual structural units, and the consequent lack of an adequate ontology of astrocytic processes21, make it challenging to design analytical tools. One recent attempt is Abstractocyte22, which allows a visual exploration of astrocytic processes and the inference of qualitative relationships between astrocytic processes and neurites.
Nevertheless, the convenience of imaging sectioned tissue under EM comes from the fact that the amount of information hidden in intact brain samples is enormous, and interpreting single-section images alone cannot overcome this issue. The density of structures in the brain is so high that a 3D reconstruction of even just a few objects visualized at once would make it impossible to distinguish them visually. For this reason, we recently proposed the use of virtual reality (VR) as an improved method for observing complex structures. We focused on astrocytes23 in order to overcome occlusion (the blocking of the visibility of an object of interest by a second one in 3D space) and to ease qualitative assessments of the reconstructions, including proofreading, as well as quantifications of features using counts of points in space. We recently combined VR visual exploration with the use of GLAM (glycogen-derived lactate absorption model), a technique to visualize a map of lactate shuttle probability for neurites by considering glycogen granules as light-emitting bodies23; in particular, we used VR to quantify the light peaks produced by GLAM.
The method presented here is a useful step-by-step guide for the segmentation and 3D reconstruction of a multiscale EM dataset, whether it comes from high-resolution imaging techniques, like FIB-SEM, or from other automated serial sectioning and imaging techniques. While FIB-SEM has the advantage of potentially reaching perfect isotropy in voxel size by cutting sections as thin as 5 nm with a focused ion beam, its field of view (FOV) might be limited to 15-20 µm because of side artifacts, possibly due to the deposition of the cut tissue when the FOV exceeds this value. Such artifacts can be avoided by using other techniques, such as SBEM, which uses a diamond knife to cut serial sections inside the microscope chamber. In this latter case, the z resolution can be around 20 nm at best (usually 50 nm), but the FOV can be larger, although the pixel resolution must be compromised for a vast region of interest. One solution to overcome this limitation (magnification vs. FOV) is to divide the region of interest into tiles and acquire each of them at a higher resolution. Here, we show results from both an SBEM stack (dataset (i) in the representative results) and a FIB-SEM stack (dataset (ii) in the representative results).
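As a back-of-the-envelope illustration of the magnification-versus-FOV trade-off discussed above, one can estimate how many tiles are needed to cover a region of interest at a target pixel size. The following is a minimal sketch; the function name, parameters, and numbers are illustrative assumptions, not values taken from the datasets in this work.

```python
import math

def tiles_needed(roi_um, detector_px, pixel_nm, overlap_frac=0.1):
    """Estimate the tile grid needed to cover a square region of interest.

    roi_um       -- side length of the region of interest, in micrometers
    detector_px  -- frame width/height, in pixels (square frames assumed)
    pixel_nm     -- desired pixel size, in nanometers
    overlap_frac -- fractional overlap between adjacent tiles for stitching
    """
    tile_um = detector_px * pixel_nm / 1000.0      # physical tile side length
    effective_um = tile_um * (1.0 - overlap_frac)  # usable size after overlap
    n = math.ceil(roi_um / effective_um)           # tiles needed per side
    return n, n * n

# Example: a 100 µm region imaged at 5 nm/pixel with 8192 x 8192 frames
per_side, total = tiles_needed(roi_um=100, detector_px=8192, pixel_nm=5)
print(per_side, total)  # 3 tiles per side, 9 tiles in total
```

Halving the pixel size quadruples the tile count for the same region, which is why magnification and FOV must be traded off explicitly when planning an acquisition.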
As the generation of ever-larger datasets becomes increasingly common, efforts to create tools for pixel classification and automated image segmentation are multiplying; nevertheless, to date, no software has proven reliability comparable to that of human proofreading, which therefore remains necessary, no matter how time-consuming it is. In general, smaller datasets that can be downsampled, as in the case of dataset (ii), can be densely reconstructed by a single expert user in a week, including proofreading time.
The protocol presented here involves the use of three software programs in particular: Fiji (version 2.0.0-rc-65/1.65b), ilastik (version 1.3.2 rc2), and Blender (2.79), all of which are open-source, multi-platform programs downloadable for free. Fiji is a release of ImageJ, powered by plugins for biological image analysis. It has a robust software architecture and is suggested because it is a common platform for life scientists and includes TrakEM2, one of the first and most widely used plugins for image segmentation. One issue experienced by many users lately is the transition from Java 6 to Java 8, which is creating compatibility issues; therefore, we suggest refraining from updating to Java 8, if possible, to allow Fiji to work properly. ilastik is a powerful piece of software providing a number of frameworks for pixel classification, each one documented and explained on its website. The carving module used for the semi-automated segmentation of EM stacks is convenient as it saves much time, allowing scientists to reduce the time spent on manual work from months to days for an experienced user, as a single click can segment an entire neurite in seconds. The preprocessing step is very demanding from a hardware point of view, and very large datasets, like the SBEM stack presented here (26 GB), require particular strategies to fit into memory, considering that one acquires such a large dataset precisely because neither the field of view nor the resolution can be compromised; therefore, downsampling might not be an appropriate solution in this case. The latest release of the software can perform the preprocessing in a few hours on a powerful Linux workstation, after which the segmentation itself takes only minutes, although scrolling through the stack would still be relatively slow. We still use this method for a first, rough segmentation, and proofread it using TrakEM2.
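Where downsampling is acceptable (i.e., the structures of interest remain resolvable), it can bring a stack within the memory budget of ilastik preprocessing before import. A minimal block-averaging sketch in NumPy is shown below; this is a generic helper we write for illustration, not part of the ilastik or Fiji distributions, and the factor must be chosen against the smallest feature one needs to segment.

```python
import numpy as np

def downsample_stack(stack, factor):
    """Downsample a 3D (z, y, x) image stack by integer block-averaging.

    Each axis is first trimmed to a multiple of `factor`, then
    factor x factor x factor blocks are averaged, reducing the
    memory footprint roughly by factor**3.
    """
    z, y, x = (d - d % factor for d in stack.shape)
    trimmed = stack[:z, :y, :x]
    return trimmed.reshape(z // factor, factor,
                           y // factor, factor,
                           x // factor, factor).mean(axis=(1, 3, 5))

# Example: a synthetic 100^3 stack reduced to 50^3 (8x fewer voxels)
stack = np.random.rand(100, 100, 100)
small = downsample_stack(stack, 2)
print(small.shape)  # (50, 50, 50)
```

Block-averaging acts as a mild low-pass filter, which also suppresses shot noise; for anisotropic SBEM stacks, one could instead downsample only in x and y to bring the voxels closer to isotropy.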
Finally, Blender is a 3D modeling software with a powerful 3D rendering engine, which can be customized with Python scripts that can be embedded in the main GUI as add-ons, such as NeuroMorph and glycogen analysis. The flexibility of this software comes with the drawback that, in contrast to Fiji, for instance, it is not designed for the online visualization of large datasets; therefore, visualizing and navigating through large meshes (exceeding 1 GB) might be slow and inefficient. For this reason, it is always advisable to choose techniques that reduce mesh complexity while being careful not to disrupt the original morphology of the structure of interest. The remesh function comes in handy and is an embedded feature of the NeuroMorph batch import tool. An issue with this function is that the octree depth value, which is related to the final resolution, should be modified according to the number of vertices of the original mesh. Small objects can be remeshed with a small octree depth (e.g., 4), but the same value might disrupt the morphology of larger objects, which need bigger values (6 at least, up to 8 or even 9 for a very big mesh, such as a full cell). It is advisable to make this process iterative and test different octree depths if the size of the object is not clear.
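One way to bootstrap the iterative search is to derive a starting octree depth from the vertex count of the imported mesh. The heuristic below is a hypothetical sketch (the function and its thresholds are our illustrative assumptions, not part of NeuroMorph or Blender); the returned depth is only a first guess to refine visually.

```python
def suggest_octree_depth(n_vertices):
    """Suggest a starting Remesh octree depth from a mesh's vertex count.

    Small objects tolerate depth 4; full cells may need 8 or 9.
    The thresholds below are illustrative starting points only and
    should be refined by visually checking the remeshed morphology.
    """
    for limit, depth in ((10_000, 4), (100_000, 5), (500_000, 6),
                         (2_000_000, 7), (10_000_000, 8)):
        if n_vertices <= limit:
            return depth
    return 9

# Inside Blender, the suggestion could then be applied to an object via
# a Remesh modifier (commented out here, as it requires the bpy module):
# mod = obj.modifiers.new("Remesh", 'REMESH')
# mod.octree_depth = suggest_octree_depth(len(obj.data.vertices))

print(suggest_octree_depth(5_000), suggest_octree_depth(5_000_000))
```

Because remeshing at too low a depth merges or erases thin processes, it is safer to start from the suggested value and step the depth up by one until the morphology stops changing visibly.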
As mentioned previously, one aspect to take into account is the computational power dedicated to reconstruction and analysis, in relation to the software being used. All the operations shown in the representative results of this manuscript were performed using a Mac Pro equipped with an AMD FirePro D500 graphics card, 64 GB of RAM, and an Intel Xeon E5 CPU with 8 cores. Fiji has a good software architecture for handling large datasets; therefore, a laptop with good hardware performance, such as a MacBook Pro with a 2.5 GHz Intel i7 CPU and 16 GB of RAM, is sufficient. The ilastik software is more demanding in terms of hardware resources, in particular during the preprocessing step. Although downsampling the image stack is a good trick to limit the software's hardware demands and allows the user to process a stack on a laptop (typically if the stack is below 500 pixels in x, y, and z), we suggest the use of a high-end computer to run this software smoothly. We use a workstation equipped with an Intel Xeon Gold 6150 CPU with 16 cores and 500 GB of RAM.
When provided with an accurate 3D reconstruction, scientists can discard the original micrographs and work directly on the 3D models to extract useful morphometric data for comparing cells of the same type, as well as different types of cells, and take advantage of VR for qualitative and quantitative assessments of the morphologies. In particular, the use of VR has proven beneficial for analyses of dense or complex morphologies that present visual occlusion (i.e., the blockage of the view of an object of interest by a second object placed between the observer and the first), which makes them difficult to represent and analyze in 3D. In the example presented, an experienced user took about 4 nonconsecutive hours to observe the datasets and count the objects. The time spent on VR analysis might vary, as aspects like VR sickness (which can, to some extent, be related to car sickness) can have a negative impact on the user experience; in this case, the user might prefer other analysis tools and limit the time dedicated to VR.
Finally, all these steps can be applied to other microscopy and non-EM techniques that generate image stacks. EM generates images that are, in general, challenging to handle and segment compared with, for instance, fluorescence microscopy, where one often deals with something comparable to a binary mask (signal versus a black background) that, in principle, can be readily rendered in 3D for further processing.
The authors have nothing to disclose.
This work was supported by the King Abdullah University of Science and Technology (KAUST) Competitive Research Grants (CRG) grant "KAUST-BBP Alliance for Integrative Modelling of Brain Energy Metabolism" to P.J.M.
Fiji | Open Source | 2.0.0-rc-65/1.65b | Open Source image processing editor www.fiji.sc |
ilastik | Open Source | 1.3.2 rc2 | Image segmentation tool www.ilastik.org |
Blender | Blender Foundation | 2.79 | Open Source 3D Modeling software www.blender.org |
HTC Vive Headset | HTC | Vive / Vive Pro | Virtual reality (VR) head-mounted display www.vive.com |
NeuroMorph | Open Source | — | Collection of Blender add-ons for 3D analysis neuromorph.epfl.ch |
Glycogen Analysis | Open Source | — | Blender add-on for the analysis of glycogen https://github.com/daniJb/glyco-analysis |
GLAM | Open Source | — | C++ code for generating GLAM maps https://github.com/magus74/GLAM |