What You Learn from This Chapter
This chapter aims to introduce modern ways to analyze fluorescence microscopy images, such as extending the image analysis workflow to tablet computers. Improvements in the technical characteristics of mobile devices provide increasingly viable options to supplement and expand the selection of microscopy image analysis tools. As an example, you will learn how to quantify the colocalization of fluorescence markers on a tablet computer without sacrificing the reliability of the method, while leveraging the benefits of modern mobile computing.
Keywords
- Modern microscopy
- Image analysis
- Mobile computing
- Tablet computers
- Machine learning
- Super-resolution
- Colocalization
- Quantification
10.1 Introduction
Recent advances in mobile computing and its expansion into medical and research fields are facilitating the adoption of tablet computers for analyzing microscopy images. The suitability of mobile devices for visualizing and examining intrinsic cellular details rests on their growing technical prowess, particularly increased screen sizes and expanded storage options. Combined with wireless capabilities, long battery life, quick boot time, and genuine portability, mobile devices simplify group work and collaboration between researchers.
Although many fields are yet to benefit from mobile computing, the technology has reached a point of maturity at which it is not only suitable for demanding, computation-intensive image analysis tasks but can even replace desktop tools in many cases. Before describing the actual procedure for analyzing fluorescence microscopy images on a tablet computer, let us outline the benefits of doing so.
10.1.1 The Benefits of Using Tablet Computers for Analysis of Light Microscopy Images
10.1.1.1 Ease of Use
When analyzing images, we all wish to have as fuss-free a user experience as possible. Conceptually, mobile operating systems are conceived to be easy to use and, compared to desktop operating systems, they are indeed easier to understand and navigate. The ease of use is multiplied by multi-touch interfaces, which rely on natural and efficient hand gestures.
10.1.1.2 Superior Engagement and Improved Work Efficiency
It is generally agreed that mobile computers provide better user engagement by offering a more fluid user experience, i.e., they work the way people think. This is because mobile devices are designed to combine the freedom of expression with the freedom of movement. The improved work efficiency of tablet computers is due to
1. Speed. Tablet computers can be operated significantly faster. There is no need to locate a computer mouse or find the proper keyboard key. Users can just tap what is needed on the screen right away.
2. User-friendliness. Pointing at something you want is an instinctive human gesture. That is why touch screens are so intuitive and require very little training to be used. Despite its technological complexity, the touchscreen interface is extremely easy to understand and use.
3. Simultaneous usage. Both hands can be used at once to express different gestures: catching, flicking, tilting, and forming any imaginable sign. In addition, a single touch can convey several kinds of information, since finger touches can vary in pressure and angle.
10.1.1.3 More Comfortable Analysis
Since mobile devices are truly portable, they are easy to carry around. The average iPad, for example, weighs about one-third as much as the average laptop, which makes a real difference in practical use.
The portability of tablet computers is complemented by cloud storage. Using the cloud, users can keep microscopy images on remote servers and wirelessly download them when needed. Cloud services ensure that files are kept up-to-date on all devices automatically.
10.1.1.4 Better Affordability
Mobile computers are less costly to acquire and have a longer lifespan. Even top-of-the-line iPads are usually significantly cheaper than desktop and laptop computers while delivering comparable performance. As a rule, professional mobile apps are also approximately ten times more affordable than similar software applications for the desktop platform. The lifespan and buying cycle of iPads are reported to be at least several times longer than those of laptops and desktops. This makes purchasing mobile computers a wiser investment.
10.1.1.5 New Possibilities
Currently, technology innovations come to the mobile platform first. Mobile devices were the first to offer high-resolution displays and 64-bit apps. In terms of pure processing power, iPad Pros, for example, have surpassed many laptops. Modern machine learning tools are quickly coming to tablet computers, making them more intelligent and capable. Importantly, manufacturers of mobile devices provide advanced development kits to facilitate discoveries in medical and research fields at a scale and pace never seen before. These tools are usually distributed as open source to encourage researchers and developers to collaborate and share their innovative technologies.
10.2 Performing Microscopy Image Analysis on a Tablet Computer
10.2.1 Analyzing Colocalization of Fluorescence Markers
Let us now look at how we can use a tablet computer, such as an iPad, to analyze the colocalization of fluorescence markers in practice.
Colocalization is a visual phenomenon defined as the presence of two or more color types of fluorescently labeled molecules at the same location. The information on its appearance can help to understand the properties of interacting proteins [1,2,3,4].
To enable image analysis on a tablet computer, we will use software that has versions for both desktop and mobile platforms. On a desktop computer, we will use the app CoLocalizer Pro for Mac, a free download from the site of CoLocalization Research Software: https://colocalizer.com/mac/. On a tablet computer, we will use the app CoLocalizer for iPad, a free download from App Store: https://geo.itunes.apple.com/app/colocalizer/id1116017542?mt=8.
Importantly, at any step of the protocol, the procedure can be switched to the desktop version of the software and then back to the mobile app, if necessary, without any interruption of the workflow.
10.2.2 Setting Up Microscopy Image Analysis on a Mobile Device
First, we need to enable access to images on a mobile device. We can do this by importing images either directly, using the USB-C port on iPad Pros, or via cloud servers. In the latter case, we perform the following steps (Fig. 10.1):
1. Open an image on a desktop computer.
2. Save the image to the cloud server. Importantly, cloud servers can be accessed from both macOS and Windows computers.
3. Re-open the image from the cloud server on a tablet computer.
After saving images to the cloud server, we will be able to access them from both the desktop and mobile platforms (Fig. 10.2).
10.2.3 Steps of the Workflow
The workflow is identical for the mobile and desktop platforms and consists of the following main steps:
1. Open the image and transform it to super-resolution.
2. Select ROI and intelligently reduce image noise.
3. Calculate coefficients.
We start the analysis by opening an image and accessing the Tools option in the app navigation bar (Fig. 10.3).
10.2.4 The Importance of Image Resolution
The vast majority of research studying proteins labeled by fluorescent markers is performed using conventional microscopes. These microscopes provide image resolution limited by the diffraction of light to approximately 250 nm. This diffraction limit is above the size of most multi-protein complexes, which are in the range of 25–50 nm in diameter. The gap between image resolution and the size of protein complexes means that the complexes remain indistinguishable when visualized. They can be visualized using so-called super-resolution microscopes, such as structured illumination microscopy (SIM), stimulated emission depletion (STED), photoactivated localization microscopy (PALM), or stochastic optical reconstruction microscopy (STORM). However, these sophisticated microscopes are still rare and very expensive to use and maintain.
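The ~250 nm figure quoted above follows from the diffraction of light; a common way to estimate it is the Abbe criterion, d = λ / (2 · NA). The sketch below uses illustrative wavelength and numerical-aperture values (not taken from the chapter) to show why multi-protein complexes fall below the resolving power of conventional microscopes:

```python
# Abbe diffraction limit: d = wavelength / (2 * numerical aperture).
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2 * numerical_aperture)

# Illustrative values: green emission (~520 nm), high-NA oil objective (NA 1.4).
d = abbe_limit_nm(520, 1.4)
print(f"Diffraction limit: {d:.0f} nm")  # Diffraction limit: 186 nm
# Even under these favorable conditions the limit stays far above the
# 25-50 nm size of most multi-protein complexes, so conventional images
# cannot resolve them.
```

Typical imaging conditions (longer wavelengths, lower NA) push this limit toward the ~250 nm stated in the text.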
10.2.4.1 Using a Machine Learning Model to Transform Conventional Fluorescence Images to Super-Resolution
Modern mobile research apps are capable of closing this resolution gap by applying machine learning (ML) models to transform conventional microscopy images into images with resolution comparable to that of super-resolution microscopes. ML is a revolutionary technology that enables computers to learn and then use that knowledge while improving with experience [5,6,7]. ML is advantageous for solving complex and time-demanding research tasks in which humans are prone to errors.
We will employ a generative adversarial network (GAN)-based ML model to restore conventional fluorescence microscopy images by increasing their resolution while preserving and enhancing image details [8]. GAN-based models are unsupervised ML models that use less training data and are easier to create; they are best suited to image transformation tasks. An increase in image resolution reveals more structural details in the image and dramatically improves the reliability of colocalization analysis. The protocol also employs another, less complex ML model that works according to the supervised principle. It is applied to the task of image classification to reduce image background noise (see the ML Correct model below).
To transform an image to super-resolution, tap the ML Super Resolution icon in the app navigation bar (Fig. 10.4). The application of the ML model increases the dimensions and resolution of the transformed image threefold (Fig. 10.5).
The use of the ML Super Resolution model is recommended for the vast majority of images with colocalization to be analyzed (see Current Limitations below).
10.2.5 Selecting a Region of Interest (ROI)
After transforming the image, we can proceed to select a smaller area on the image where we plan to perform the calculation of colocalization coefficients.
The selection of a region of interest (ROI) is a crucial step in performing colocalization quantification on any platform, including mobile. We select an ROI to delimit the area of interest and contextualize colocalization analysis. It is necessary to select an area with as little contribution of surrounding pixels as possible. You can perform selection using the Lasso, Polygon, Oval, and Rectangle types. For achieving the greatest precision when selecting complex biological objects, it is best to use Lasso and Polygon.
To select an ROI, tap Tools > Select ROI. On the selection screen, tap the selection type matching your research purpose and start selecting (Fig. 10.6). Selection is done by tapping on the screen and dragging the finger across the areas with colocalization. When the area is defined, tap the ending point of the selection to close it.
After you have selected an ROI, you can use the resizing points to adjust it closer to the object of your interest. When resizing a selection, you will see a label with the exact pixel size of the selected shape. To move the selected ROI, tap and hold the selection shape (Fig. 10.7).
To analyze the whole image, there is no need to use the selection tool. In this case, the entire image is considered the ROI.
10.2.6 The Importance of Noise Reduction
Before estimating colocalization in an image, we need to reduce image noise. This is crucially important because by its nature fluorescence microscopy produces an inherently weak signal. As a result, raw fluorescence microscopy images are always degraded by noise. This noise appears as random background “crumbliness” throughout the image. Since most colocalization studies focus on tiny objects in the images, background noise can hide crucial structural details and hamper the reliability of image analysis.
Fluorescence images are impacted by two types of noise:
1. Photon noise. Signal-dependent. Varies throughout the image. Comes from the emission and detection of the light. Follows a Poisson distribution, in which the standard deviation changes with the local image brightness.
2. Read noise. Signal-independent. Depends on the microscope detector. Comes from inaccuracies in quantifying numbers of detected photons. Follows a Gaussian distribution, in which the standard deviation stays the same throughout the image.
The resulting noise, therefore, combines two independent noise types (photon and read), and the recorded pixel value is effectively the sum of the two plus the true (noise-free) rate of photon emission.
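This combined noise model is easy to simulate. The sketch below, assuming NumPy and illustrative parameter values (a flat 100-photon signal, read noise with a standard deviation of 5), shows how a recorded pixel value arises from the true emission rate plus Poisson photon noise plus Gaussian read noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# True (noise-free) photon emission rate per pixel (illustrative: 100 photons).
true_signal = np.full((256, 256), 100.0)

# Photon (shot) noise: Poisson-distributed and signal-dependent --
# its standard deviation grows with local brightness (~sqrt of the mean).
photon_noisy = rng.poisson(true_signal)

# Read noise: Gaussian and signal-independent -- the same standard
# deviation everywhere (illustrative: 5 photon-equivalents).
read_noise = rng.normal(0.0, 5.0, true_signal.shape)

# The recorded value combines both noise sources around the true rate.
recorded = photon_noisy + read_noise

print(f"mean = {recorded.mean():.1f}, std = {recorded.std():.1f}")
# Expected std ~ sqrt(100 + 5**2) ~ 11.2 for these parameters.
```

Because the two sources are independent, their variances add, which is why the measured spread exceeds either source alone.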
1. What types of noise do fluorescence microscopy images contain, and why is noise reduction needed prior to image analysis?
10.2.6.1 Using an ML Model to Reduce Background Image Noise
In the CoLocalizer app, you can use a supervised classification ML model to intelligently reduce image noise. The procedure is called background correction. To use ML-powered background correction, tap Tools > Correct Background in the app navigation bar (Fig. 10.8) to open the Background Correction screen. On this screen, tap the ML Correct button to reduce background noise in the image (Fig. 10.9).
10.2.7 Analyzing Colocalization
Following noise reduction, we can now proceed to the main task of our procedure: estimating the ratio of colocalized pixels in the images. This ratio is expressed by values called colocalization coefficients.
10.2.7.1 Calculating Colocalization Coefficients
Coefficients are calculated on the selected ROI or the whole image if no ROI was selected (see above). To calculate coefficients, tap Tools > Quantify Colocalization in the app navigation bar (Fig. 10.10) to open the Colocalization screen. On this screen, you will find an image scattergram (scatterplot) showing the distribution of pixels in the image according to the selected pair of channels, values of coefficients estimating colocalization, and the option to reveal colocalized pixels (Fig. 10.11).
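The chapter does not spell out which coefficients the app computes; two standard measures in colocalization work are Pearson's correlation coefficient and the Manders coefficients M1/M2 [1, 3]. As a minimal NumPy sketch of what such coefficients quantify (not the app's implementation):

```python
import numpy as np

def pearson_r(ch1, ch2):
    """Pearson's correlation between two channel intensity arrays."""
    a = ch1.astype(float).ravel() - ch1.mean()
    b = ch2.astype(float).ravel() - ch2.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def manders(ch1, ch2, thr1=0, thr2=0):
    """Manders' M1/M2: fraction of each channel's total intensity found in
    pixels where the other channel is above its threshold."""
    ch1 = ch1.astype(float)
    ch2 = ch2.astype(float)
    m1 = ch1[ch2 > thr2].sum() / ch1.sum()
    m2 = ch2[ch1 > thr1].sum() / ch2.sum()
    return m1, m2

# Toy 2x2 example: identical channels colocalize perfectly.
red = np.array([[10, 0], [0, 20]])
green = np.array([[10, 0], [0, 20]])
r = pearson_r(red, green)
m1, m2 = manders(red, green)
print(f"Pearson r = {r:.2f}, M1 = {m1:.2f}, M2 = {m2:.2f}")
# Pearson r = 1.00, M1 = 1.00, M2 = 1.00
```

In practice these coefficients are computed only over the selected ROI, which is why careful ROI selection (Sect. 10.2.5) directly affects the result.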
10.2.7.2 Revealing Areas with Colocalization
It is also possible to reveal areas with colocalization on both the scattergram and the image by displaying colocalized pixels. To display them, simply toggle the Reveal Colocalized Pixels switch (Fig. 10.11).
10.2.8 Exporting Results
After the coefficients have been calculated, we can export the results. Calculation results can be exported in the form of data and in the form of images.
To export the results, tap the Export button in the app navigation bar (Fig. 10.12). Choose what to export, either Data or Images (Fig. 10.13). Exported results can be saved either locally, on the iPad, or to the cloud server, and can be accessed from both iPadOS and macOS versions of the CoLocalizer app as well as from Android mobile devices and Windows computers.
10.2.9 Documentation
The mobile app CoLocalizer for iPad is thoroughly documented. The documentation can be accessed within the app, without interrupting the workflow, by tapping Settings > CoLocalizer Help in the app navigation bar (Fig. 10.14), as well as by downloading a free eBook from the Apple Books store: https://geo.itunes.apple.com/book/colocalizer-for-ipad/id1259842440?mt=11.
10.2.10 Current Limitations
Professional scientific image analysis on a mobile device is still a new technique that has several limitations:
1. Image size. The biggest limitation of using tablet computers is related to the size of the images to be analyzed. For large images (100 MB and more), the use of mobile computing may not always be practical, since downloading and synchronizing them via mobile networks can take a long time.
2. Original image quality. Another limitation is related to the original quality of the images, particularly when employing the ML Super Resolution option. Out-of-focus, low-resolution, and weak fluorescence images are very difficult to restore reliably, and thus the application of the ML model may not always be justified.
3. Local artifacts. In some cases, restored images may contain local artifacts. These artifacts usually appear when restoring complex multi-shaped 3D structures. When artifacts are observed, it is recommended to exclude them from the analysis using the Select ROI tool.
2. What are the limitations of professional scientific image analysis on a mobile device?
10.3 Interpretation of Results
To make the results of colocalization experiments understandable to a broad audience of researchers, we need a unified approach to interpreting them. This approach is based on simple terminology: a set of five linguistic variables obtained using a fuzzy system model and computer simulation [9].
These variables are as follows: Very Weak, Weak, Moderate, Strong, and Very Strong. They are tied to specific values of coefficients and their use helps to ensure that the results of colocalization experiments are correctly reported and universally understood by all researchers studying localization of fluorescence markers.
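The exact coefficient ranges for each variable are defined by the fuzzy-system model in [9]; the cutoffs in the sketch below are illustrative placeholders only, meant to show how such a mapping from coefficient to linguistic variable can be applied in practice:

```python
# Map a colocalization coefficient (0.0-1.0) to a linguistic variable.
# NOTE: the cutoff values are illustrative placeholders; the actual
# ranges come from the fuzzy-system model in reference [9].
BINS = [
    (0.2, "Very Weak"),
    (0.4, "Weak"),
    (0.6, "Moderate"),
    (0.8, "Strong"),
    (1.0, "Very Strong"),
]

def describe(coefficient):
    """Return the linguistic variable for a coefficient in [0, 1]."""
    for upper, label in BINS:
        if coefficient <= upper:
            return label
    raise ValueError("coefficient must be between 0 and 1")

print(describe(0.15))  # Very Weak
print(describe(0.72))  # Strong
```

Reporting the variable alongside the numeric coefficient lets readers unfamiliar with a particular coefficient still grasp the strength of colocalization.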
3. How can the results of colocalization studies be described in qualitative terms, and why is a unified approach to interpreting them important?
10.4 Benchmark Datasets
The results of colocalization experiments can also be compared with reference images from the Colocalization Benchmark Source (CBS):
https://www.colocalization-benchmark.com.
CBS is a free collection of downloadable images for testing and validation of the degree of colocalization of fluorescence markers in microscopy studies. It consists of computer-simulated images with exactly known (pre-defined) values of colocalization ranging from 0% to 90%. They can be downloaded and arranged in the form of image sets as well as separately: https://www.colocalization-benchmark.com/downloads.html.
These benchmark images simulate real fluorescence microscopy images with a sampling rate of 4 pixels per resolution element. If the optical resolution is 200 nm, then the benchmark images correspond to the images with a pixel size of 50 nm. All benchmark images are free of background noise and out-of-focus fluorescence. The size of the images is 1024 × 1024 pixels.
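The stated relation between optical resolution and pixel size is simple sampling arithmetic, sketched below for verification:

```python
def pixel_size_nm(optical_resolution_nm, pixels_per_resolution_element=4):
    """Pixel size implied by sampling a resolution element with N pixels."""
    return optical_resolution_nm / pixels_per_resolution_element

# CBS benchmark images: 200 nm optical resolution sampled at
# 4 pixels per resolution element.
print(pixel_size_nm(200))  # 50.0
```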
We recommend using these benchmark images as reference points when quantifying colocalization in fluorescence microscopy studies both on mobile and desktop platforms.
Take-Home Message
This chapter describes the benefits and detailed step-by-step procedure of the protocol for using the mobile computing platform to analyze microscopy images, specifically the images with colocalization of fluorescence markers. The main points of the chapter are as follows:
- The current technology of mobile computing and the state of development of mobile apps make it possible to perform intensive image analysis tasks previously feasible only on the desktop platform.
- The use of mobile apps greatly improves the efficiency of scientific image analysis by allowing researchers to work in their preferred environment, including from home.
- Meaningful analysis of microscopy images on tablet computers requires versions of image analysis apps for both mobile and desktop platforms. Mobile and desktop versions of the apps should have the same tools and settings and be capable of providing identical and continuous workflows.
- Mobile apps now offer the latest technology innovations, including state-of-the-art ML-powered image restoration and classification models that dramatically increase the quality of images and the reliability of the analysis.
- By saving exported results of image analysis in universally compatible data and image file formats on cloud servers, mobile computing helps to bridge different mobile and desktop computing platforms and extends collaboration between research teams working in different physical locations.
Answers
1. Fluorescence microscopy images contain signal-dependent photon noise and signal-independent read noise. The recorded pixel value combines these two noise types with the true (noise-free) rate of photon emission. Reducing image noise improves the reliability of colocalization analysis.
2. The biggest limitations of professional scientific image analysis on a mobile device are: (a) image size (very large images are impractical to download and synchronize), (b) original image quality (low-quality images are difficult to restore with the ML Super Resolution model), and (c) artifacts (in some cases restored images may contain local artifacts).
3. The results of colocalization studies can be presented using the following linguistic variables: Very Weak, Weak, Moderate, Strong, and Very Strong. A unified approach to describing them ensures proper reporting and universal understanding by all scientists working in the field.
References
1. Aaron JS, Taylor AB, Chew T-L. Image co-localization – co-occurrence versus correlation. J Cell Sci. 2018;131(3):jcs211847. https://doi.org/10.1242/jcs.211847.
2. Costes SV, Daelemans D, Cho EH, Dobbin Z, Pavlakis G, Lockett S. Automatic and quantitative measurement of protein-protein colocalization in live cells. Biophys J. 2004;86(6):3993–4003. https://doi.org/10.1529/biophysj.103.038422.
3. Dunn KW, Kamocka MM, McDonald JH. A practical guide to evaluating colocalization in biological microscopy. Am J Physiol Cell Physiol. 2011;300(4):C723–42. https://doi.org/10.1152/ajpcell.00462.2010.
4. Zinchuk V, Grossenbacher-Zinchuk O. Quantitative colocalization analysis of fluorescence microscopy images. Curr Protoc Cell Biol. 2014;62(1):4.19.11–14.19.14. https://doi.org/10.1002/0471143030.cb0419s62.
5. Sommer C, Gerlich DW. Machine learning in cell biology – teaching computers to recognize phenotypes. J Cell Sci. 2013;126(24):5529. https://doi.org/10.1242/jcs.123604.
6. Weigert M, Schmidt U, Boothe T, Müller A, Dibrov A, Jain A, Wilhelm B, Schmidt D, Broaddus C, Culley S, Rocha-Martins M, Segovia-Miranda F, Norden C, Henriques R, Zerial M, Solimena M, Rink J, Tomancak P, Royer L, Jug F, Myers EW. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat Methods. 2018;15(12):1090–7. https://doi.org/10.1038/s41592-018-0216-7.
7. Zinchuk V, Grossenbacher-Zinchuk O. Machine learning for analysis of microscopy images: a practical guide. Curr Protoc Cell Biol. 2020;86(1):e101. https://doi.org/10.1002/cpcb.101.
8. Wang H, Rivenson Y, Jin Y, Wei Z, Gao R, Günaydın H, Bentolila LA, Kural C, Ozcan A. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat Methods. 2019;16(1):103–10. https://doi.org/10.1038/s41592-018-0239-0.
9. Zinchuk V, Wu Y, Grossenbacher-Zinchuk O. Bridging the gap between qualitative and quantitative colocalization results in fluorescence microscopy studies. Sci Rep. 2013;3(1):1365. https://doi.org/10.1038/srep01365.
10. Li X, Zhang G, Wu J, Xie H, Lin X, Qiao H, Wang H, Dai Q. Unsupervised content-preserving image transformation for optical microscopy. bioRxiv. 2019; https://doi.org/10.1101/848077.
Acknowledgments
Some of the images used for illustration of the protocol steps were taken from the publicly available Image Gallery of Thermo Fischer Scientific: https://www.thermofisher.com/jp/ja/home/technical-resources/research-tools/image-gallery.html.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Zinchuk, V., Grossenbacher-Zinchuk, O. (2022). Modern Microscopy Image Analysis: Quantifying Colocalization on a Mobile Device. In: Nechyporuk-Zloy, V. (eds) Principles of Light Microscopy: From Basic to Advanced . Springer, Cham. https://doi.org/10.1007/978-3-031-04477-9_10
Print ISBN: 978-3-031-04476-2
Online ISBN: 978-3-031-04477-9