Minimizing the Number of Stainings for Segmentation Using Deep Learning Tools

Abstract number
47
Presentation Form
Poster
Corresponding Email
[email protected]
Session
Poster Session 2
Authors
Romain Guiet (1), Olivier Burri (1), Audrey Menaesse (1), Arne Seitz (1)
Affiliations
1. Bioimaging & Optics Platform (BIOP), EPFL-SV, Station 15, 1015 Lausanne
Keywords

live-imaging, CSBDeep, StarDist, Cellpose, Fiji, CellProfiler


Abstract text

The acquisition of multiple channels is routine in modern light microscopy experiments in the life sciences. Unfortunately, some of these channels are used only for segmentation. Stainings Used for Segmentation (SUS) are acquired with the sole purpose of defining, for example, the nuclear or cytoplasmic area of a cell. These experiment-independent SUS are required to comply with good practice in image analysis. They nevertheless reduce the number of possible Stainings Used for Experimentation (SUE) and therefore the possibility of studying further analytes in the same experiment. This is particularly true for live-cell experiments, where preserving specimen viability while maximizing the number of SUE is even more challenging.


Deep learning-based image analysis methods have brought unprecedented precision and control to the previously daunting task of nuclear and cell segmentation. However, the variability of samples, preparations, stainings and imaging modalities still puts the hope of a universal segmentation model far in the future. Models therefore often need to be retrained to cover new variations, which implies the generation of ground-truth annotations, a time-consuming task.

In parallel to segmentation, deep learning-based image-to-image models make it possible to translate one imaging modality into another (e.g. image restoration with CARE). By construction, this approach does not require human-annotated ground truth, which lowers the barrier to training new models.
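
As a concrete illustration of this "in silico channel" idea, the sketch below uses the CSBDeep/CARE Python API to predict a missing SUS channel (e.g. a nuclear stain) from a channel that is already acquired. The model name, file names and axes are assumptions for illustration only, not part of the presented work, and the training data pairs would come from a dedicated paired acquisition.

    # Minimal sketch (assumption: a CARE model, here called 'sue_to_nuclei', was
    # previously trained on paired SUE/SUS acquisitions and saved under 'models/').
    from tifffile import imread, imwrite
    from csbdeep.models import CARE

    # The channel that is kept in the experiment (a SUE), a single 2D plane here.
    x = imread('experiment_SUE_channel.tif')      # hypothetical file name

    # Load the trained image-to-image model and predict the missing SUS channel.
    model = CARE(config=None, name='sue_to_nuclei', basedir='models')
    predicted_sus = model.predict(x, axes='YX')   # "in silico" nuclear channel

    imwrite('predicted_SUS_channel.tif', predicted_sus)

The predicted channel can then be fed to a conventional segmentation pipeline (e.g. in Fiji or CellProfiler) exactly as an acquired SUS would be.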


We present different strategies to achieve SUS reduction, either by combining deep learning with more standard image analysis tools or by using end-to-end deep learning. We discuss two different approaches, based on "in silico channel" prediction (CARE) or on direct object segmentation (using Cellpose and StarDist).
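
For the direct object segmentation route, a minimal sketch of such a workflow with the StarDist and Cellpose Python APIs is given below, assuming that the published pretrained models are adequate for the sample; in practice they may need to be retrained, and the file name and channel settings are placeholders.

    # Minimal sketch (assumptions: the pretrained '2D_versatile_fluo' StarDist
    # model and the 'cyto' Cellpose model suit the sample; file name is a placeholder).
    from tifffile import imread
    from csbdeep.utils import normalize
    from stardist.models import StarDist2D
    from cellpose import models

    img = imread('experiment_SUE_channel.tif')    # hypothetical file name

    # StarDist: star-convex object detection, well suited to nuclei.
    stardist_model = StarDist2D.from_pretrained('2D_versatile_fluo')
    nuclei_labels, _ = stardist_model.predict_instances(normalize(img))

    # Cellpose: generalist cell segmentation; channels=[0, 0] means grayscale input.
    cellpose_model = models.Cellpose(model_type='cyto')
    cell_masks, flows, styles, diams = cellpose_model.eval(
        img, diameter=None, channels=[0, 0])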