Abstract
U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows U-Net to be adapted to new tasks on the basis of a few annotated samples.
Data availability
Datasets F1-MSC, F2-GOWT1, F3-SIM, F4-HeLa, DIC1-HeLa, PC1-U373 and PC2-PSC are from the ISBI Cell Tracking Challenge 2015 (ref. 17). Information on how to obtain the data can be found at http://celltrackingchallenge.net/datasets.html, and free registration for the challenge is currently required. Datasets PC3-HKPV, BF1-POL, BF2-PPL and BF3-MiSp are custom and are available from the corresponding author upon reasonable request. Datasets for the detection experiments partially contain unpublished sample-preparation protocols and are currently not freely available. After protocol publication, datasets will be made available on an as-requested basis. Details on sample preparation for our life science experiments can be found in Supplementary Note 3 and the Life Sciences Reporting Summary.
Change history
25 February 2019
In the version of this paper originally published, one of the affiliations for Dominic Mai was incorrect: "Center for Biological Systems Analysis (ZBSA), Albert-Ludwigs-University, Freiburg, Germany" should have been "Life Imaging Center, Center for Biological Systems Analysis, Albert-Ludwigs-University, Freiburg, Germany." This change required some renumbering of subsequent author affiliations. These corrections have been made in the PDF and HTML versions of the article, as well as in any cover sheets for associated Supplementary Information.
References
Sommer, C., Strähle, C., Koethe, U. & Hamprecht, F. A. Ilastik: interactive learning and segmentation toolkit. in IEEE Int. Symp. Biomed. Imaging 230–233 (IEEE, Piscataway, NJ, USA, 2011).
Arganda-Carreras, I. et al. Bioinformatics 33, 2424–2426 (2017).
Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015 Vol. 9351, 234–241 (Springer, Cham, Switzerland, 2015).
Rusk, N. Nat. Methods 13, 35 (2016).
Webb, S. Nature 554, 555–557 (2018).
Sadanandan, S. K., Ranefall, P., Le Guyader, S. & Wählby, C. Sci. Rep. 7, 7860 (2017).
Weigert, M. et al. Nat. Methods https://doi.org/10.1038/s41592-018-0216-7 (2018).
Haberl, M. G. et al. Nat. Methods 15, 677–680 (2018).
Ulman, V. et al. Nat. Methods 14, 1141–1152 (2017).
Schneider, C. A., Rasband, W. S. & Eliceiri, K. W. Nat. Methods 9, 671–675 (2012).
Long, J., Shelhamer, E. & Darrell, T. Fully convolutional networks for semantic segmentation. in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) 3431–3440 (IEEE, Piscataway, NJ, USA, 2015).
Simonyan, K. & Zisserman, A. Preprint at https://arxiv.org/abs/1409.1556 (2014).
Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016 Vol. 9901, 424–432 (Springer, Cham, Switzerland, 2016).
Jia, Y. et al. Preprint at https://arxiv.org/abs/1408.5093 (2014).
He, K., Zhang, X., Ren, S. & Sun, J. Preprint at https://arxiv.org/abs/1502.01852 (2015).
Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J. & Zisserman, A. Int. J. Comput. Vis. 88, 303–338 (2010).
Maška, M. et al. Bioinformatics 30, 1609–1617 (2014).
Acknowledgements
This work was supported by the German Federal Ministry for Education and Research (BMBF) through the MICROSYSTEMS project (0316185B) to T.F. and A.D.; the Bernstein Award 2012 (01GQ2301) to I.D.; the Federal Ministry for Economic Affairs and Energy (ZF4184101CR5) to A.B.; the Deutsche Forschungsgemeinschaft (DFG) through the collaborative research center KIDGEM (SFB 1140) to D.M., Ö.Ç., T.F. and O.R., and (SFB 746, INST 39/839,840,841) to K.P.; the Clusters of Excellence BIOSS (EXC 294) to T.F., D.M., R.B., A.A., Y.M., D.S., T.L.T., M.P., K.P., M.S., T.B. and O.R.; BrainLinks-Brain-Tools (EXC 1086) to Z.J., K.S., I.D. and T.B.; grants DI 1908/3-1 to J.D., DI 1908/6-1 to Z.J. and K.S., and DI 1908/7-1 to I.D.; the Swiss National Science Foundation (SNF grant 173880) to A.A.; the ERC Starting grant OptoMotorPath (338041) to I.D.; and the FENS-Kavli Network of Excellence (FKNE) to I.D. We thank F. Prósper, E. Bártová, V. Ulman, D. Svoboda, G. van Cappellen, S. Kumar, T. Becker and the Mitocheck consortium for providing a rich diversity of datasets through the ISBI segmentation challenge. We thank P. Fischer for manual image annotations. We thank S. Wrobel for tobacco microspore preparation.
Author information
Authors and Affiliations
Contributions
T.F., D.M., R.B., Y.M., Ö.Ç., T.B. and O.R. selected and designed the computational experiments. T.F., R.B., D.M., Y.M., A.B. and Ö.Ç. performed the experiments: R.B., D.M., Y.M. and A.B. (2D), and T.F. and Ö.Ç. (3D). R.B., Ö.Ç., A.A., T.F. and O.R. implemented the U-Net extensions into caffe. T.F. designed and implemented the Fiji plugin. D.S. and M.S. selected, prepared and recorded the keratinocyte dataset PC3-HKPV. T.F. and O.R. prepared the airborne-pollen dataset BF1-POL. A.D., S.W., O.T., C.D.B. and K.P. selected, prepared and recorded the protoplast and microspore datasets BF2-PPL and BF3-MiSp. T.L.T. and M.P. prepared, recorded and annotated the data for the microglial proliferation experiment. J.D., K.S. and Z.J. selected, prepared and recorded the optogenetic dataset. I.D., J.D. and Z.J. manually annotated the optogenetic dataset. I.D., T.F., D.M., R.B., Ö.Ç., T.B. and O.R. wrote the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Integrated supplementary information
Supplementary Figure 1 The U-Net architecture in the example of a 2D cell segmentation network.
(left) Input: an image tile of 540×540 pixels with C channels (blue box). (right) Output: the K-class soft-max segmentation of 356×356 pixels (yellow box). Blocks show the computed feature hierarchy. Numbers atop each block: number of feature channels; numbers to the left of each block: spatial feature-map shape in pixels. Yellow arrows: data flow.
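The 540×540 → 356×356 relation follows from the valid (unpadded) convolutions of the U-Net architecture (ref. 3). A minimal sketch, assuming the standard four-level U-Net with two 3×3 valid convolutions per level and 2×2 pooling/up-convolution (the function name is ours):

```python
def unet_output_size(in_size, depth=4):
    """Trace the spatial tile size through a U-Net with valid 3x3
    convolutions (each removes 2 px per side) and 2x2 pooling."""
    s = in_size
    for _ in range(depth):      # contracting path
        s -= 4                  # two 3x3 valid convolutions
        assert s % 2 == 0, "size must be even before each pooling step"
        s //= 2                 # 2x2 max pooling
    s -= 4                      # bottleneck convolutions
    for _ in range(depth):      # expanding path
        s *= 2                  # 2x2 up-convolution
        s -= 4                  # two 3x3 valid convolutions
    return s

print(unet_output_size(540))  # → 356
```

The same computation reproduces the 572 → 388 tile sizes of the original U-Net publication (ref. 3).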
Supplementary Figure 2 Separation of touching cells by using pixelwise loss weights.
(a) Generated segmentation mask with a one-pixel-wide background ridge between touching cells (white: foreground; black: background). (b) Map of pixel-wise loss weights that forces the network to separate touching cells.
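The weight map in panel (b) follows the scheme of ref. 3, w(x) = w_c(x) + w0·exp(−(d1(x) + d2(x))² / (2σ²)), where d1 and d2 are the distances to the nearest and second-nearest cell. A minimal numpy sketch (our own function name; brute-force distances; the class-balancing term w_c is omitted for brevity; w0 = 10, σ = 5 as in ref. 3):

```python
import numpy as np

def separation_weights(labels, w0=10.0, sigma=5.0):
    """Pixel-wise separation weights: large on narrow background
    ridges between two cells, near zero far away from cells."""
    h, w = labels.shape
    cell_ids = [i for i in np.unique(labels) if i != 0]
    ys, xs = np.mgrid[0:h, 0:w]
    dists = []
    for cid in cell_ids:
        cy, cx = np.nonzero(labels == cid)
        # distance of every pixel to the nearest pixel of this cell
        d = np.sqrt((ys[..., None] - cy) ** 2
                    + (xs[..., None] - cx) ** 2).min(axis=-1)
        dists.append(d)
    dists = np.sort(np.stack(dists), axis=0)  # dists[0]=d1, dists[1]=d2
    weight = np.zeros((h, w))
    if len(cell_ids) >= 2:
        weight = w0 * np.exp(-(dists[0] + dists[1]) ** 2
                             / (2 * sigma ** 2))
    weight[labels > 0] = 0.0  # separation term acts on background only
    return weight
```

On a toy mask with two nearby cells, the background pixels between the cells receive a much higher weight than background pixels far from both cells, which is exactly what drives the network to keep touching cells apart.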
Supplementary Figure 3 Training data augmentation through random smooth elastic deformation.
(a) Upper left: raw image; upper right: labels; lower left: loss weights; lower right: 20-μm grid (for illustration purposes only). (b) Deformation field (black arrows) generated by bicubic interpolation from a coarse grid of displacement vectors (blue arrows; magnification: 5×). Vector components are drawn from a Gaussian distribution (σ = 10 px). (c) Backward-warped images of (a) using the deformation field.
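The augmentation in this figure can be sketched in a few lines: draw coarse displacement vectors from a Gaussian, upsample them to a dense field by cubic-spline interpolation, and backward-warp the image through the field. A sketch under those assumptions (function name and `grid_spacing` default are ours, not from the paper):

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def elastic_deform(image, grid_spacing=32, sigma=10.0, rng=None):
    """Random smooth elastic deformation of a 2D image."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    gh, gw = h // grid_spacing + 2, w // grid_spacing + 2
    # coarse grid of displacement vectors, components ~ N(0, sigma^2)
    coarse = rng.normal(0.0, sigma, size=(2, gh, gw))
    # dense deformation field via cubic-spline upsampling
    dense = np.stack([zoom(c, (h / gh, w / gw), order=3) for c in coarse])
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys + dense[0], xs + dense[1]])
    # backward warp: sample the source image at the displaced coordinates
    return map_coordinates(image, coords, order=1, mode='reflect')
```

Applying the same deformation field (same seeded generator) to the raw image, the labels and the loss weights keeps the three aligned, as in panels (a) and (c).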
Supplementary information
Supplementary Text and Figures
Supplementary Figures 1–3 and Supplementary Notes 1–3
Supplementary Software 1
Caffe_unet binary package (GPU version with cuDNN; recommended). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04, CUDA 8.0.61 (https://developer.nvidia.com/cuda-80-ga2-download-archive) and cuDNN 7.1.4 for CUDA 8.0 (https://developer.nvidia.com/rdp/cudnn-archive). At the time of publication, downloading cuDNN from the NVIDIA website requires free registration as an NVIDIA developer.
Supplementary Software 2
Caffe_unet binary package (GPU version without cuDNN). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04 and CUDA 8.0.61 (https://developer.nvidia.com/cuda-80-ga2-download-archive).
Supplementary Software 3
Caffe_unet binary package (CPU version). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04
Supplementary Software 4
The source-code difference (patch file) against the open-source caffe deep-learning software (https://github.com/BVLC/caffe.git, commit hash d1208dbf313698de9ef70b3362c89cfddb51c520). Check out the correspondingly tagged commit and apply the patch using "git apply" to obtain the full source for custom builds.
Rights and permissions
About this article
Cite this article
Falk, T., Mai, D., Bensch, R. et al. U-Net: deep learning for cell counting, detection, and morphometry. Nat Methods 16, 67–70 (2019). https://doi.org/10.1038/s41592-018-0261-2
This article is cited by
- A deep learning model for differentiating paediatric intracranial germ cell tumour subtypes and predicting survival with MRI: a multicentre prospective study. BMC Medicine (2024)
- Automatic segmentation of fat metaplasia on sacroiliac joint MRI using deep learning. Insights into Imaging (2024)
- A hierarchical fusion strategy of deep learning networks for detection and segmentation of hepatocellular carcinoma from computed tomography images. Cancer Imaging (2024)
- An automated in vitro wound healing microscopy image analysis approach utilizing U-net-based deep learning methodology. BMC Medical Imaging (2024)
- CohortFinder: an open-source tool for data-driven partitioning of digital pathology and imaging cohorts to yield robust machine-learning models. npj Imaging (2024)