Artificial Intelligence based on Deep Learning (DL) is opening new horizons in biomedical research and promises to revolutionize the microscopy field. It is now transitioning from the hands of experts in computer sciences to biomedical researchers. Here, we introduce recent developments in DL applied to microscopy, in a manner accessible to non-experts. We give an overview of its concepts, capabilities and limitations, presenting applications in image segmentation, classification and restoration. We discuss how DL shows an outstanding potential to push the limits of microscopy, enhancing resolution, signal and information content in acquired data. Its pitfalls are discussed, along with the future directions expected in this field.

A hallmark of human intelligence is the ability to adapt previous knowledge to new situations and to recognize meaning in patterns. Replicating these abilities in non-human agents is the main goal of Artificial Intelligence (AI). Machine learning refers to a subset of AI methods based on extracting useful features from large sets of well-understood data and applying this information to make predictions or decisions on unseen data [1,2]. In the early 2010s, one type of machine learning, Deep Learning (DL), based on so-called neural networks (NNs), became increasingly prominent as a tool for image classification with super-human capabilities [3].

In contrast with classical algorithms, which use a set of specifically designed rules to transform an input into a new output (e.g. a median filter for image denoising) (Figure 1a), an NN is initially presented with a large set of paired inputs and desired outputs (for instance, noisy and high-quality images, respectively), called the training dataset, from which it learns how to map each input to its corresponding desired output (Figure 1b-i). A central difference from conventional algorithms is therefore that the function performed by an NN is essentially determined by the training dataset itself. Once trained, the network can be used to process unseen input data and produce the desired output, in a process called inference (Figure 1b-ii).
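
To make this distinction concrete, the minimal sketch below contrasts the two approaches from the text: a median filter whose rules are fixed by the programmer, and inference with an already-trained network (here a hypothetical `trained_net`, standing in for any fitted model).

```python
# A minimal sketch, assuming NumPy/SciPy. The median filter is the 'known
# routine' of Figure 1a: its behaviour is fully fixed by the programmer's
# choice of rules and parameters (here, a 3x3 neighbourhood).
import numpy as np
from scipy.ndimage import median_filter

noisy = np.random.rand(64, 64)             # stand-in for a noisy micrograph

# (a) Classical algorithm: rules and parameters chosen by the programmer.
denoised_classical = median_filter(noisy, size=3)

# (b) Inference (Figure 1b-ii): `trained_net` is a hypothetical network whose
# parameters were learned from (noisy, high-quality) training pairs.
# denoised_learned = trained_net(noisy)
```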

Figure 1.
Deep learning compared with classical computation.

(a) Classical computer programs convert an input (e.g. a noisy image of a cell) into a desired output (e.g. a sharp image) via an algorithm with known rules and parameters (a 'known routine'). In contrast, NNs are trained with pairs of corrupted and ground-truth images, e.g. a noisy image and its equivalent high-quality image of a cell. During training (b-i), the untrained network (dark grey) learns to transform the inputs (left) into the outputs (right) by observing a large number of paired examples from the training dataset. After training (b-ii), the trained network (light grey) can perform the task on novel data, like a conventional algorithm, providing the output from new input data. The large black arrowheads represent dataflow.

Although NNs were first envisioned in the 1950s [4], it took decades, and the introduction of backpropagation [5,6], before NNs first achieved significant performance in pattern recognition tasks in the late 1980s [7–9]. While inference with trained networks is generally fast, the training process can be computationally intensive, taking hours to days, especially for complex networks. The success of NNs in image recognition is thus closely linked to the exponential increase in the computational power of processing units, notably Graphical Processing Units (GPUs), and the rapid growth in the availability of large datasets since the 2000s [1,10]. In 2012, the first GPU-enabled NN, called AlexNet [3], vastly outperformed the competition in the ImageNet image classification challenge, a seminal breakthrough for the AI field.

Since then, NNs have expanded their reach, outsmarting humans at board games such as Go [11], enabling self-driving cars [12,13] and significantly improving biomedical image analysis [2,14–17]. In the latter field, their applications include automated, accurate classification and segmentation of cell images [18,19], extraction of structures from label-free microscopy imaging (artificial labelling) [20,21] and, most recently, image restoration (e.g. denoising and resolution enhancement) [22–24]. Furthermore, as quantitative imaging proves increasingly powerful for research, methods able to analyse big data with (super-)human accuracy have become highly desirable. So, although DL in microscopy is yet to become widely available, the current growth in research efforts hints at a technology with the potential to fundamentally change how imaging data are analysed and how microscopy is carried out.

Here, we give non-specialist readers an overview of the potential of NNs in the context of some of the major challenges of microscopy. We also discuss some of their current limitations and give an outlook on possible future applications in microscopy. While we briefly cover the basic mathematical principles used in NNs, we refer the reader to the review by LeCun et al. [1], which gives an extended perspective on machine learning, NNs and their historical development. Additionally, we recommend reviews that comprehensively discuss the application of AI in the biomedical sciences and computational biology [16,25–27].

NNs are complex networks of connected 'neurons' arranged in 'layers', a nomenclature inspired by the animal visual cortex [4]. A neuron can be interpreted as a mathematical function with adjustable parameters. A layer is commonly made of a group of neurons that take the same input data and transfer their output to the next layer in the network. Each layer thereby provides a new representation of the data to the next layer, with growing levels of abstraction. The transformation performed here may contain non-linear operations, such as the rectified linear unit (ReLU), which has been particularly successful for feature extraction tasks [28] since it allows much more complex data representations to be modelled than simple linear operations would. The output of the last layer constitutes the output of the network. The more layers a network has, the deeper it is and the more complex the information it can extract [1,3].
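
As a minimal illustration of these definitions (a sketch, not any specific published architecture), a single fully connected layer computes a weighted sum of its inputs plus a bias for each neuron, followed by the ReLU non-linearity:

```python
# A minimal sketch of one layer of 8 neurons acting on a 16-dimensional
# input: each neuron computes ReLU(w . x + b), where the weights w and
# bias b are the adjustable parameters that training will tune.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)            # input from the previous layer
W = rng.normal(size=(8, 16))       # adjustable weights, one row per neuron
b = np.zeros(8)                    # adjustable biases

layer_output = np.maximum(0.0, W @ x + b)   # ReLU(Wx + b)
# Feeding this output into further such layers yields a 'deep' network.
```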

An important form of NN, especially for tasks involving feature recognition in image data, is the convolutional neural network (CNN) [1,9]. Here, neurons extract image features by performing convolutions on the input image. These convolutional layers are often followed by so-called pooling layers, which reduce the number of pixels in the image and therefore simplify the feature representations from the convolutional layers. This combination of successive feature extraction and data shrinkage leads to a simplified version of the input image, similar to a barcode, which the network learns to associate with the desired output [9,27].
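
The sketch below shows this convolution-then-pooling pattern in PyTorch (an assumed framework choice; the layer sizes are arbitrary): two rounds of feature extraction and spatial shrinkage turn a 64 × 64 image into a compact 16 × 16 feature representation.

```python
# A minimal convolutional block: convolutions extract local features and
# max-pooling halves the spatial size, producing the compact,
# 'barcode'-like summary described in the text.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # feature extraction
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
)

image = torch.randn(1, 1, 64, 64)    # a batch of one grayscale image
features = cnn(image)                # shape: (1, 32, 16, 16)
```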

During training, the NN learns to map inputs to outputs by iteratively adjusting its neurons' parameters so as to minimize the difference between its own output and the desired output over the training dataset. This is a non-trivial task, especially for deep networks, as it can require iteratively estimating the effect of thousands or millions of parameters. This problem was efficiently addressed by the backpropagation method, which projects the network error back onto every neuron's individual contribution [5,6]. Adjusting the neurons' parameters is then achieved by a method called gradient descent, i.e. changing the parameters in the direction in which the error decreases fastest. Stochastic gradient descent, which estimates this parameter update from a randomly chosen example (or small batch) of the training dataset rather than from the entire dataset at each iteration, is now the most common method [5,29,30]. After this iterative learning stage, the trained network can be applied to new data for which outputs are not available (Figure 1b-ii).
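
A minimal sketch of this loop, assuming PyTorch and random stand-in data: backpropagation (`loss.backward()`) attributes the error to each parameter, and a stochastic gradient descent step nudges the parameters downhill on a random mini-batch.

```python
import torch
import torch.nn as nn

# A toy regression model with random stand-in data; in microscopy the
# inputs and targets would be image pairs rather than vectors.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(100, 16)     # training inputs
targets = torch.randn(100, 1)     # desired outputs

for step in range(200):
    i = torch.randint(0, 100, (8,))          # random mini-batch (stochastic)
    loss = loss_fn(model(inputs[i]), targets[i])
    optimizer.zero_grad()
    loss.backward()                          # backpropagation
    optimizer.step()                         # gradient descent update
```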

One issue with deep networks is their capacity to memorize the entire training dataset, a failure mode called over-fitting, as opposed to learning generalizable features of the data. This may happen if the training dataset is too small or if the network is too deep (e.g. has too many layers). In this case, the network will perform extremely well on the training dataset but will generalize very poorly to new, unseen data. Therefore, during training, the network performance is monitored using an unseen validation dataset. Comparing validation and training performance is essential for model selection, i.e. choosing a network architecture that suits the dataset and does not overfit. In a final step, the network is tested on an unseen dataset, contained in neither the training nor the validation set, to establish its performance.
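
Continuing the sketch above, validation-based monitoring can be expressed in a few lines: the data are split into training and held-out validation sets, and the parameters with the lowest validation loss are retained (a simple form of model selection; the data are again random stand-ins).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x, y = torch.randn(120, 16), torch.randn(120, 1)
x_train, y_train = x[:100], y[:100]      # training set
x_val, y_val = x[100:], y[100:]          # unseen validation set

best_val, best_state = float("inf"), None
for epoch in range(50):
    opt.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    opt.step()
    with torch.no_grad():                # validation: no parameter updates
        val = loss_fn(model(x_val), y_val).item()
    if val < best_val:                   # keep the best-generalizing weights
        best_val = val
        best_state = {k: v.clone() for k, v in model.state_dict().items()}

model.load_state_dict(best_state)
# Final performance would then be reported on a third, untouched test set.
```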

Generally, the training dataset should contain many different examples of the desired outputs. For example, a network designed to categorize an animal should be trained with images showing the animal in different positions or environments. While data augmentation can be a powerful way to supplement training datasets [15], generating and curating the training dataset is often the major hurdle in applying DL. Classification networks such as AlexNet [3] were trained on millions of annotated training instances [10], and while such scale is not universal [20,22,31], networks used for microscopy applications are often trained with thousands of examples to reach high prediction accuracy [24,32].
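
As a minimal sketch of data augmentation, geometric transformations such as flips and rotations multiply the number of training examples without any new acquisitions (eight variants per image here; real pipelines often add elastic deformations, intensity changes and noise).

```python
# Eight axis-aligned variants of one image: 4 rotations x 2 mirror states.
import numpy as np

def augment(image):
    """Yield the eight flips/rotations of a 2D image."""
    for flipped in (image, np.fliplr(image)):
        for k in range(4):
            yield np.rot90(flipped, k)

example = np.random.rand(64, 64)       # stand-in for one training image
augmented = list(augment(example))     # 8 training examples from 1 image
```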

In recent years, important technical developments have improved or sped up the learning stage. These include reusing pre-trained networks (transfer learning [33–35], which allows much smaller training datasets to be used), pitting two NNs against one another (as in Generative Adversarial Networks, GANs, where one network learns to generate fake datasets and another learns to discriminate fake from real) [24,36] and allowing the direct use of very large non-curated datasets (self-supervised or unsupervised learning) [37–42].
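
A sketch of the transfer-learning idea, under the assumption that a pre-trained feature extractor is available (the tiny `pretrained_encoder` below is only a stand-in): the learned features are frozen and only a small new output 'head' is trained on the new, smaller dataset.

```python
import torch
import torch.nn as nn

# Stand-in for a network whose weights were already learned on a large
# dataset; in practice this would be loaded from a published model.
pretrained_encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
)
for p in pretrained_encoder.parameters():
    p.requires_grad = False               # freeze the learned features

head = nn.Linear(16, 3)                   # new task, e.g. 3 cell classes
model = nn.Sequential(pretrained_encoder, nn.Flatten(), head)

# Only the head's few parameters are optimized, so far less data is needed.
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)
```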

Researchers in life sciences face several challenges when imaging biological specimens: How can phototoxicity and bleaching of fluorescent labels be balanced against good signal or resolution? How many fluorescent markers can be reliably imaged? And how can relevant and complex information be extracted from large image datasets, without tedious manual annotation and human bias? Aided by the increasing availability of high-throughput imaging, the new generation of DL methods in microscopy has the potential to address some of these problems.

In the following sections, we will present an overview of several exciting recent developments in AI and how they might address some of the current microscopy limitations mentioned above. Despite some conceptual overlaps between the methods presented here, we have separated them into four categories: object detection and classification (facilitating information extraction), image segmentation (allowing large and potentially unbiased high-throughput analysis), artificial labelling (tackling the limitations of the maximum number of fluorescent labels and that of phototoxicity) and image restoration (reducing phototoxicity, improving denoising or resolution).

An important goal of microscopy image analysis is to recognize and assign identities to relevant features in an image (Figure 2). Here, objects in an image can be identified and classified based on the NN analysis. For example, identifying mitotic cells in a tissue sample can be essential for cancer diagnosis. However, manual annotation is tedious and limited in throughput, and experts can introduce bias into such annotations by deciding which image features are important while ignoring others. Although several computational methods have been introduced to accelerate detection or classification tasks [43–45], these still often rely on handcrafted parameters chosen by researchers. The advantage of NNs is their capacity to learn the relevant image features autonomously. Classification NNs have therefore been used extensively in the biomedical imaging field, especially for cancer detection as large training sets have become more available [15,46–50], and in high-throughput and high-content screens, where they have shown expert-level recognition of subcellular features [32,51–54]. A new approach in this area is to use unsupervised learning to identify subcellular protein localizations [38]. Lu et al. showed that unsupervised clustering of fluorescent proteins allows explorative studies on protein localization data (as there is no user bias in the input data) and also removes the requirement for manual labelling of a training dataset [38].

Figure 2.
Object detection and classification.

Schematic of a trained NN that detects and classifies cells of different types or stages, e.g. to identify mitotic cells. A categorization map (right) can be obtained from a brightfield image (left). Here, the cell cycle stage of each cell is predicted. During training, the network was presented with a set of manually annotated, representative images of cells at different stages of the cell cycle.

NNs have also shown their capacity to accurately identify cellular states from transmitted-light data, for example, differentiating cells based on cell-cycle stage [39], identifying cells affected by phototoxicity [55] or recognizing stem cell-derived endothelial cells [56]. Determining such cellular identities previously required the introduction of an intracellular label, with the associated risk of affecting the physiology of the cell. These examples show how using NNs could be a less invasive method to identify cell fate or identity.

Segmentation is the identification of image regions that are part of specific cellular or subcellular structures and is often an essential step in image analysis (Figure 3). In this case, unlike the classification approach, the NN identifies whether each pixel belongs to a category of structure, typically defined as signal vs. background. A drawback of some existing segmentation platforms [43,45,57] is their need for user-based fine-tuning and manual error-removal, requiring time and expertise or adding human bias [58]. In multiple studies, CNNs have outperformed classical approaches in terms of accuracy and generalization [18,58–61], especially when performing cell segmentation in co-cultures of multiple cell types [58]. In the context of histopathology, CNNs have been used successfully to segment colon glands [62–66], breast tissues [67,68] and nuclei [69], outperforming non-DL approaches. Naturally, there is overlap between the challenges of classification and segmentation; hence, segmentation is often followed by classification and can even improve the accuracy of classification [49,70].

Figure 3.
Image segmentation.

Schematic of a trained NN that produces segmentation masks from brightfield images. Given a brightfield input image of cells (left), the network assigns pixel values to the segmentation mask, delineating single cells against the background (right). During training, the network was presented with a set of brightfield images that had been manually segmented.

The segmentation field has also pioneered a network architecture called the U-net [18], which has wider importance in microscopy, especially when both the input and output of the NN are images (image-to-image algorithms). The U-net architecture uses several convolution/pooling layers (the encoder), followed by several deconvolution/upsampling layers (the decoder) [18,59]. The encoder learns the main features of the image and the decoder reassigns them to different pixels of the image. While initially used for segmentation tasks, these architectures can be adapted to other image-to-image transformations (as opposed to simple classification of the image), making them some of the most important networks for microscopy applications today [17,20,22,69,71].
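
A heavily reduced, single-level sketch of this encoder-decoder idea (real U-nets stack several such levels with many more channels [18]): features are pooled down, upsampled back, and a 'skip connection' concatenates the encoder features into the decoder so that fine spatial detail is preserved.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One-level U-net-style encoder-decoder with a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                         # encoder: shrink
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # decoder: upsample
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))       # 1-channel output

    def forward(self, x):
        e = self.enc(x)                            # full-resolution features
        m = self.mid(self.pool(e))                 # coarse representation
        u = self.up(m)                             # back to input resolution
        return self.dec(torch.cat([u, e], dim=1))  # skip connection

output = TinyUNet()(torch.randn(1, 1, 64, 64))     # image in, image out
```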

The direct observation of specific structures in cells using light microscopy typically requires the introduction of labels, either by genetic labelling or chemical staining, which can disturb the biological system. Additionally, fluorescence microscopy, especially when using laser illumination, is inherently more phototoxic to cells than transmitted-light imaging [72,73]. Addressing these limitations, two studies using CNNs have shown that specific cellular structures, such as the nuclear membrane, nucleoli, plasma membrane and mitochondria, can be extracted by NNs from label-free images [20,21]. While the task of artificial labelling is similar to segmentation, the main difference lies in the creation of the training dataset, which does not need to be hand-labelled. Instead, the training set contains paired images of the same cells obtained from brightfield and fluorescence modalities. The networks then learn to predict a fluorescent label from transmitted-light or EM images, alleviating the need to acquire the corresponding fluorescence images (Figure 4). This capability is especially useful for long-term, live-cell imaging, where low-phototoxicity acquisitions are highly advantageous. Interestingly, the networks achieved high accuracy using a training dataset of only 30–40 images [20] and were able to identify dying cells or distinguish different cell types and subcellular structures [21]. Christiansen et al. [21] also demonstrated their network's ability for transfer learning, allowing a pre-trained network to be applied between different microscopes and labels, highlighting the versatility of these networks. However, the lack of a good understanding of the origin of the features that the networks are able to produce from label-free modalities has generated some scepticism and fuelled debate around artificial labelling.
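
A minimal sketch of what makes this training set different from a segmentation one: the 'ground truth' is simply the fluorescence channel acquired alongside each transmitted-light image, so an image-to-image network can be trained to regress fluorescence from the label-free input (random tensors stand in for co-registered acquisitions; a real model would be U-net-like rather than this two-layer stack).

```python
import torch
import torch.nn as nn

# Stand-ins for co-registered acquisitions of the same fields of view.
brightfield  = torch.randn(8, 1, 64, 64)    # label-free inputs
fluorescence = torch.randn(8, 1, 64, 64)    # measured fluorescence targets

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):
    opt.zero_grad()
    # The network learns to predict the label; no hand annotation involved.
    loss = nn.functional.mse_loss(net(brightfield), fluorescence)
    loss.backward()
    opt.step()
```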

Figure 4.
Artificial labelling.

Schematic of a trained NN that labels cellular structures from images of unstained cells, e.g. cells imaged in brightfield. Given an input image of a single cell under e.g. transmitted light (left), the network predicts the fluorescence signal for specific structures in the cell (right), such as the membrane (shown in red), nucleus (shown in dark blue) or endoplasmic reticulum (shown in green). This prediction is made possible by presenting the network, during training, with images of cells labelled with fluorescent markers alongside their respective brightfield images.

The amount and quality of features that can be extracted from a microscopy image are limited by fundamental constraints inherent to all optical set-ups: signal-to-noise ratio (SNR) and resolution. Overcoming these limitations constitutes a central goal in microscopy. In particular, super-resolution microscopy (SRM) [74–78] now allows imaging of cellular structures at the nanoscale using light microscopy. However, phototoxicity, bleaching and low temporal resolution still limit the capacity to achieve high-resolution long-term imaging in living specimens. Recently, several research groups have proposed CNN methods addressing some of these issues [22,23].

For such networks, training datasets consist, for instance, of images of the same field acquired at low and high SNR, and the network learns to predict a denoised (high-SNR) image from a noisy (low-SNR) input. This approach was demonstrated by Weigert et al. [22] with their content-aware image restoration (CARE) methodology on the highly photosensitive organism Schmidtea mediterranea, allowing a 60-fold decrease in illumination dose and thus enabling longer and more detailed observation of this organism in vivo. CARE also demonstrated the successful restoration of axial resolution in deep microscopy sections, performing better than conventional reconstruction approaches such as deconvolution. Furthermore, CARE was able to reconstruct SRM images from diffraction-limited images, using the Super-Resolution Radial Fluctuations (SRRF) method as a reference [78–80]. Similarly, SRM images can be obtained from conventional confocal microscopy images by using STimulated Emission Depletion (STED) microscopy to acquire the high-resolution training dataset [24].
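
To illustrate the structure of such a restoration training pair, the sketch below degrades a clean image with synthetic Poisson (shot) noise to emulate a low-dose acquisition; this is an illustrative assumption, since CARE-style pairs are typically acquired directly at the microscope [22].

```python
# Building one (low-SNR input, high-SNR target) training pair.
import numpy as np

rng = np.random.default_rng(1)
clean = rng.uniform(0.0, 1.0, size=(64, 64))     # stand-in high-SNR target

photons = 5.0                                    # low illumination dose
noisy = rng.poisson(clean * photons) / photons   # shot-noise-limited input

# (noisy, clean) pairs would then train an image-to-image network exactly
# as in the artificial-labelling loop above, with `noisy` as the input and
# `clean` as the target.
```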

Given the difficulty of creating large annotated training sets, different unsupervised learning methods for image restoration that require no labelled training data have recently been explored [41,42]. Here, a network learns image denoising from a dataset of noisy images alone. While these methods may not always reach the performance of networks trained with ground-truth data [41], they represent an interesting avenue for tasks where large training sets are difficult or impossible to assemble.
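
A minimal sketch in the spirit of the blind-spot idea behind Noise2Void [41] (much simplified relative to the published method): a few pixels are hidden by replacing them with a neighbouring value, and the loss is computed only at those hidden positions, so the network must infer each pixel from its surroundings rather than copy it.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
noisy = torch.rand(4, 1, 64, 64)        # only noisy images are available

for step in range(100):
    ys = torch.randint(1, 63, (32,))    # random pixel positions to hide
    xs = torch.randint(1, 63, (32,))
    masked = noisy.clone()
    # Replace each hidden pixel with a neighbour (same positions across
    # the whole batch, a simplification for brevity).
    masked[:, :, ys, xs] = noisy[:, :, ys + 1, xs]
    opt.zero_grad()
    out = net(masked)
    # Penalize errors only at the hidden positions.
    loss = ((out[:, :, ys, xs] - noisy[:, :, ys, xs]) ** 2).mean()
    loss.backward()
    opt.step()
```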

CNNs have also recently generated interest in the single-molecule localization microscopy (SMLM) field. All available studies were published within the last year by independent groups, suggesting that the potential of AI for SRM is increasingly recognized in the community [23,81–84]. Applying sophisticated network architectures, with combinations of widefield and SMLM data as inputs [23,81], the networks are able to map sparse SMLM data of microtubules, mitochondria or nuclear pores directly onto their SRM output images. This demonstrates the strength of CNNs for pattern recognition in redundant data, such as SMLM data, where only a few frames may suffice to reconstruct an SRM image. Interestingly, some of these algorithms require no parameter tuning or specific knowledge about the imaged structures [81]. This is especially advantageous at high emitter densities, where conventional SMLM reconstruction algorithms can be time-consuming. However, the means by which an NN can learn to produce SRM images from sparse or widefield data remain a heavily debated topic in both the microscopy and AI fields.

Other studies have taken a different approach to SMLM reconstruction, making use of the intrinsic properties of SMLM data [82,83]. Here, networks are trained to detect the spatial positions of fluorophores in SMLM input images, similarly to a typical SMLM algorithm. This approach partially circumvents the controversy because the reconstructed images are more similar to standard SMLM reconstructions, making the resolution improvement easier to interpret. While achieving accuracy similar to state-of-the-art SMLM algorithms [85], a main achievement of DL for SMLM is the speed with which super-resolved images can be produced: in several studies, reconstruction was accelerated by several orders of magnitude compared with conventional algorithms [23,81,82,84].

AI is transforming microscopy, both by allowing human or super-human performance in many image analysis tasks and as an automated high-performance tool for big-data analysis [32,53,58] (Table 1). However, while the performance, versatility and speed of DL are likely to continue increasing, significant challenges remain that will not be solved by improved processing units.

Table 1
Selected publications from recent years applying or developing deep learning for microscopy

Authors | Year | Details | Reference

Segmentation/classification
Ning et al. | 2005 | Classification and segmentation of tissues in stages of C. elegans development | [61]
Ciresan et al. | 2012 | Segmentation of neuronal membranes in EM | [60]
Ciresan et al. | 2013 | Mitosis detection in breast cancer | [15]
Long et al. | 2015 | Introduction of fully convolutional networks (fCNN) for segmentation tasks | [59]
Ronneberger et al. | 2015 | U-net: significant increase in efficiency for image-to-image networks | [18]
Kainz et al. | 2015 | Colon gland segmentation; network outperforms handcrafted feature detection | [62]
Kraus et al. | 2016 | High-throughput classification and segmentation in yeast | [52]
Xu et al. | 2016 | Accelerated cell detection in very large images and images with large cell numbers | [46]
Dürr et al. | 2016 | Phenotyping of over 40 000 drug-treated single-cell images | [54]
Van Valen et al. | 2016 | Efficient cell–cell segmentation in live-cell imaging | [58]
Richmond et al. | 2017 | Identification of cells showing damage from phototoxicity | [55]
Kraus et al. | 2017 | Identification of yeast strains and mutants and subcellular protein localizations | [53]
Esteva et al. | 2017 | Expert-level classification of skin lesions with a mobile-phone-applicable CNN | [14]
Pärnamaa and Parts | 2017 | High-throughput classification of protein localization in yeast | [32]
Godinez et al. | 2017 | Phenotyping cells after drug treatment, organelle identification | [51]
Eulenberg et al. | 2017 | Identification of cell-cycle phases and differences in disease stages | [39]
Kusumoto et al. | 2018 | Detection of endothelial cells derived from iPSCs | [56]
Naylor et al. | 2018 | Application of U-net for nuclei segmentation in histopathology sections | [69]
Lu et al. | 2018 | Unsupervised learning for protein localization prediction in human and yeast cells | [38]
Falk et al. | 2018 | U-net as an ImageJ plugin for non-experts, including pre-trained networks | [19]

Artificial labelling
Christiansen et al. | 2018 | Label prediction in fixed and live cells | [21]
Ounkomol et al. | 2018 | 3D label prediction in live-cell, IF and EM images | [20]

Image restoration/super-resolution
Nehme et al. | 2018 | SMLM images from diffraction-limited input | [81]
Boyd et al. | 2018 | Identifies localization of fluorophores from STORM single frames | [82]
Ouyang et al. | 2018 | SMLM reconstruction using a very small number of frames to predict the SR image | [23]
Nelson and Hess | 2018 | PALM reconstruction network trained directly on the image to be analysed | [83]
Weigert et al. | 2018 | Denoising and resolution enhancement in different organisms and cell types | [22]
Li et al. | 2018 | SMLM localization by deep learning and artefact removal by statistical inference | [84]
Krull et al. | 2018 | Image denoising using unsupervised learning on purely noisy input images | [41]
Wang et al. | 2019 | Conversion of low-NA to high-NA or diffraction-limited to STED-resolution images | [24]

This table covers the four main themes where AI has provided solutions to some of the major limitations of microscopy in recent years.

Fundamentally, the task carried out by an NN, as well as its performance, is determined by the quality of the training dataset. Any bias present in the training dataset (commonly introduced by the user at the selection level) will subsequently be incorporated into the network. This highlights the need for careful data curation, which depends heavily on the task at hand. For instance, in a classification task, under-represented populations might be classified less accurately, or a model could overfit to the training examples. For the training dataset to be representative, it often needs to contain thousands to millions of examples. In the absence of a robust training dataset, a user should consider selecting a different model architecture or even alternatives to DL, which exist in the form of other machine learning approaches or classical computer programs [27].

Another concern frequently raised in the microscopy community is how far network outputs can be trusted to represent the underlying data. This is a real concern, since CNNs have been observed to cause image hallucinations [86] or to fail catastrophically as a result of minute changes in the image [87]. To address this issue, several groups have assessed the presence of artefacts in their network output images, notably using the SQUIRREL (Super-resolution QUantitative Image Rating and Reporting of Error Locations) approach [22–24,80,84,88]. While this may identify the presence of artefacts, it does not address the underlying problem: it is difficult to interpret how a CNN produces its output from the image input, especially because of the abstraction of data representations in deep networks. This lack of interpretability is particularly concerning in the case of resolution enhancement, where it is not clear what information a CNN can extract from a diffraction-limited image to achieve a non-diffraction-limited image, nor how DL algorithms achieve this without producing significantly more artefacts than standard algorithms [22,24]. Similar concerns exist for artificial labelling, as it may prove challenging to distinguish genuine signal from network hallucinations. Beyond interpretability, there are anecdotal examples of networks that have 'cheated' their way to high performance, e.g. by using undesirable features such as empty space to identify dead cells [55] or by identifying patterns in the ordering of the training set rather than in the images themselves [89]. This shows how much the performance of DL methods relies on the choice and curation of training datasets.

Furthermore, the design of CNN architectures has been described as 'notorious as an empirical endeavour' [21]. Choosing network hyperparameters, such as network depth, the number of neural connections, the learning rate and other hand-coded features of NNs [32,83,90,91], as well as the necessary hardware, often requires in-depth technical know-how, which limits accessibility for many potential users in the life sciences.

Nevertheless, AI has great enabling potential for microscopy, given its super-human performance in classification tasks and image reconstruction. Hence, the issues discussed above should not discourage the use of NNs as a research tool, but should be a reason for caution when interpreting their performance, as with any computational analysis tool.

A rapidly increasing number of publications using DL in microscopy suggests that this technology can be a versatile and powerful tool for addressing significant problems in biomedical imaging. However, the delay between developments and their applications means that some areas of AI research have not yet been widely translated to microscopy. For example, transfer learning, which allows pre-trained networks to carry out new tasks, will likely become more widely investigated; forms of it are only starting to become available [19]. Finding methods to reuse NNs robustly across multiple tasks, image sizes or microscopes would make DL a more flexible and usable approach to image analysis than is currently possible. Importantly, it would reduce the need for large training datasets and shorten the training time needed for new tasks. It could therefore lower the accessibility barrier and minimize the need for users to be fully familiar with NN specifics. In turn, this would allow DL to become a widely used tool within the life sciences, rather than a method requiring expert knowledge. Additionally, we expect the AI field to develop tools to inspect and detect network failures, which would build trust and establish the role that AI can and cannot play in modern research. We can also envisage new AI-enabled technologies that allow integrated microscopy platforms [92] to be controlled by an artificial agent, optimizing microscopy at the image acquisition level.

Perspectives
  • Importance of the field. Deep Learning (DL) applied to microscopy shows incredible promise to transform the way we acquire and analyse microscopy data. Historically developed to automate tedious image segmentation and classification in biomedical images, it is beginning to be used in many imaging tasks: identifying subcellular features, recovering high-quality images from noisy data and predicting specific cellular labels in unlabelled specimens.

  • Current state of the field. Applications of DL are currently developed by expert computer scientists who can deploy the large computing resources required to train these networks. However, these resources are extremely versatile, since typical network architectures (such as the U-net) can be used for numerous tasks. A new limitation has therefore emerged: the generation and curation of the datasets necessary to train the networks.

  • Future directions. The design, implementation and use of DL in microscopy are bound to be democratized, largely thanks to the availability of hardware and software packages that make these methods accessible. Concerns remain, however, about the biases built into networks (e.g. through the curation of training data) and about catastrophic network failures, which have yet to be studied in detail. The field is still undergoing exponential development, and many approaches developed for robotics or computer vision will likely permeate biomedical research, creating new opportunities for researchers in the life sciences.

Abbreviations

AI, Artificial Intelligence; CARE, Content-Aware image REstoration; CNNs, Convolutional Neural Networks; DL, Deep Learning; GPUs, Graphical Processing Units; NNs, Neural Networks; ReLU, Rectified Linear Unit; SMLM, Single-Molecule Localization Microscopy; SNR, Signal-to-Noise Ratio; SRM, Super-Resolution Microscopy; SRRF, Super-Resolution Radial Fluctuations; STED, STimulated Emission Depletion

This work was funded by grants from the UK Biotechnology and Biological Sciences Research Council [BB/M022374/1; BB/P027431/1; BB/R000697/1; BB/S507532/1] (R.H. and R.F.L.), the UK Medical Research Council [MR/K015826/1] (R.H.), the Wellcome Trust [203276/Z/16/Z] (R.H.) and core funding to the MRC Laboratory for Molecular Cell Biology, University College London [MC_UU12018/7]. L.C. is supported by a 4-year MRC Research Studentship.

We would like to thank Simon F. Nørrelykke (ETH Zürich, Switzerland) and Alex Lu (University of Toronto, Canada) for kindly suggesting corrections to this manuscript.

The Authors declare that there are no competing interests associated with the manuscript.

References

1. LeCun, Y., Bengio, Y. and Hinton, G. (2015) Deep learning. Nature 521, 436–444
2. Chartrand, G., Cheng, P.M., Vorontsov, E., Drozdzal, M., Turcotte, S., Pal, C.J. et al. (2017) Deep learning: a primer for radiologists. Radiographics 37, 2113–2131
3. Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012) ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25
4. Rosenblatt, F. (1958) The perceptron: a probabilistic model for information storage and organization. Psychol. Rev. 65, 386–408
5. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) Learning representations by back-propagating errors. Nature 323, 533–536
6. Parker, D. (1985) Learning-logic: TR-47, MIT Press, Cambridge
7. LeCun, Y. (1989) Generalization and network design strategies. Connection. Perspect. 19, 143–155
8. Le Cun, Y., Boser, B., Denker, J.S., Howard, R.E., Habbard, W., Jackel, L.D. et al. (1990) Handwritten digit recognition with a back-propagation network. Adv. Neural Inf. Process. Syst., 396–404
9. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W. et al. (1989) Backpropagation applied to handwritten Zip code recognition. Neural Comput. 1, 541–551
10. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K. and Fei-Fei, L. (2009) ImageNet: a large-scale hierarchical image database. 2009 IEEE Conf. Comput. Vis. Pattern Recognit., 248–255
11. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A. et al. (2017) Mastering the game of Go without human knowledge. Nature 550, 354–359
12. Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P. et al. (2016) End to end learning for self-driving cars. arXiv Prepr. arXiv:1604.07316, 1–9
13. Maqueda, A.I., Loquercio, A., Gallego, G., Garcia, N. and Scaramuzza, D. (2018) Event-based vision meets deep learning on steering prediction for self-driving cars. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 5419–5427
14. Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M. et al. (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115–118
15. Ciresan, D.C., Giusti, A., Gambardella, L.M. and Schmidhuber, J. (2013) Mitosis detection in breast cancer histology images with deep neural networks. ICPR 2012 Mitosis Detect. Compet.
16. Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M. et al. (2017) A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88
17. Roth, H.R., Shen, C., Oda, H., Oda, M., Hayashi, Y., Misawa, K. et al. (2018) Deep learning and its application to medical image segmentation. Med. Imaging Technol. 36, 63–71
18. Ronneberger, O., Fischer, P. and Brox, T. (2015) U-net: convolutional networks for biomedical image segmentation. Lect. Notes Comput. Sci. 9351, 234–241
19. Falk, T., Mai, D., Bensch, R., Çiçek, Ö., Abdulkadir, A., Marrakchi, Y. et al. (2019) U-Net: deep learning for cell counting, detection, and morphometry. Nat. Methods 16, 67
20. Ounkomol, C., Seshamani, S., Maleckar, M.M. and Collman, F. (2018) Label-free prediction of three-dimensional fluorescence images from transmitted light microscopy. Nat. Methods 15, 917–920
21. Christiansen, E.M., Yang, S.J., Ando, D.M., Javaherian, A., Skibinski, G., Lipnick, S. et al. (2018) In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173, 792–803.e19
22. Weigert, M., Schmidt, U., Boothe, T., Müller, A., Dibrov, A., Jain, A. et al. (2018) Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097
23. Ouyang, W., Aristov, A., Lelek, M., Hao, X. and Zimmer, C. (2018) Deep learning massively accelerates super-resolution localization microscopy. Nat. Biotechnol. 36, 460–468
24. Wang, H., Rivenson, Y., Jin, Y., Wei, Z. and Gao, R. (2019) Deep learning achieves super-resolution in fluorescence microscopy. Nat. Methods 16, 103–110
25. Angermueller, C., Pärnamaa, T., Parts, L. and Stegle, O. (2016) Deep learning for computational biology. Mol. Syst. Biol. 12, 1–16
26. Belthangady, C. and Royer, L.A. (2019) Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods
27. Nichols, J.A., Herbert Chan, H.W. and Baker, M.A.B. (2019) Machine learning: applications of artificial intelligence to imaging and diagnosis. Biophys. Rev. 11, 111–118
28. Glorot, X., Bordes, A. and Bengio, Y. (2011) Deep sparse rectifier neural networks. Proc. 14th Int. Conf. Artif. Intell. Stat. 15, 315–323
29. Werbos, P. (1974) Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. Dissertation, Department of Applied Mathematics, Harvard University
30. Bottou, L. (2010) Large-scale machine learning with stochastic gradient descent. Proc. COMPSTAT'2010, Physica-Verlag HD, 177–186
31. Weigert, M., Royer, L., Jug, F. and Myers, G. (2017) Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks. Int. Conf. Med. Image Comput. Comput. Interv. 2, 126–134
32. Pärnamaa, T. and Parts, L. (2017) Accurate classification of protein subcellular localization from high-throughput microscopy images using deep learning. G3 7, 1385–1392
33. Pan, S.J. and Yang, Q. (2010) A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 1345–1359
34. Phan, H.T.H., Kumar, A., Kim, J. and Feng, D. (2016) Transfer learning of a convolutional neural network for HEP-2 cell image classification. IEEE 13th Int. Symp. Biomed. Imaging, 1208–1211
35. Segebarth, D., Griebel, M., Duerr, A., von Collenberg, C.R., Martin, C., Fiedler, D. et al. (2018) DeepFLaSh, a deep learning pipeline for segmentation of fluorescent labels in microscopy images. bioRxiv, 473199
36. Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S. et al. (2014) Generative adversarial nets. Adv. Neural Inf. Process. Syst., 2672–2680
37. Le, Q.V., Ranzato, M.A., Devin, M., Corrado, G.S. and Ng, A.Y. (2012) Building high-level features using large scale unsupervised learning. arXiv Prepr. arXiv:1112.6209
38. Lu, A., Kraus, O.Z., Cooper, S. and Moses, A.M. (2018) Learning unsupervised feature representations for single cell microscopy images with paired cell inpainting. bioRxiv, 395954
39. Eulenberg, P., Köhler, N., Blasi, T., Filby, A., Carpenter, A.E., Rees, P. et al. (2017) Reconstructing cell cycle and disease progression using deep learning. Nat. Commun. 8, 1–6
40. Cheplygina, V., De Bruijne, M. and Pluim, J.P.W. (2019) Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med. Image Anal. 54, 280–296
41. Krull, A., Buchholz, T.-O. and Jug, F. (2019) Noise2Void – learning denoising from single noisy images. Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2129–2137
42. Batson, J. and Royer, L. (2019) Noise2Self: blind denoising by self-supervision. arXiv Prepr. arXiv:1901.11365
43. Carpenter, A.E., Jones, T.R., Lamprecht, M.R., Clarke, C., Kang, I., Friman, O. et al. (2006) CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol. 7
44. Held, M., Schmitz, M.H.A., Fischer, B., Walter, T., Neumann, B., Olma, M.H. et al. (2010) CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging. Nat. Methods 7, 747–754
45. Sommer, C., Straehle, C., Ullrich, K. and Hamprecht, F.A. (2011) ilastik: interactive learning and segmentation toolkit. IEEE Int. Symp. Biomed. Imaging: From Nano to Macro, IEEE, 230–233
46. Xu, Z. and Huang, J. (2016) Detecting 10 000 cells in one second. Int. Conf. Med. Image Comput. Comput. Interv., 676–684
47. Malon, C. and Cosatto, E. (2013) Classification of mitotic figures with convolutional neural networks and seeded blob features. J. Pathol. Inform. 4, 9
48. Shkolyar, A., Gefen, A., Benayahu, D. and Greenspan, H. (2015) Automatic detection of cell divisions (mitosis) in live-imaging microscopy images using convolutional neural networks. Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. EMBS 2015, 743–746
49. Wang, D., Khosla, A., Gargeya, R., Irshad, H. and Beck, A.H. (2016) Deep learning for identifying metastatic breast cancer. arXiv Prepr. arXiv:1606.05718, 1–6
50. Xu, Y., Mo, T., Feng, Q., Zhong, P., Lai, M. and Chang, E.I.C. (2014) Deep learning of feature representation with multiple instance learning for medical image analysis. ICASSP, IEEE Int. Conf. Acoust. Speech Signal Process. Proc. 1, 1626–1630
51. Godinez, W.J., Hossain, I., Lazic, S.E., Davies, J.W. and Zhang, X. (2017) A multi-scale convolutional neural network for phenotyping high-content cellular images. Bioinformatics 33, 2010–2019
52. Kraus, O.Z., Ba, J.L. and Frey, B.J. (2016) Classifying and segmenting microscopy images with deep multiple instance learning. Bioinformatics 32, 52–59
53. Kraus, O.Z., Grys, B.T., Ba, J., Chong, Y., Frey, B.J., Boone, C. et al. (2017) Automated analysis of high-content microscopy data with deep learning. Mol. Syst. Biol. 13, 1–15
54. Dürr, O. and Sick, B. (2016) Single-cell phenotype classification using deep convolutional neural networks. J. Biomol. Screen. 21, 998–1003
55. Richmond, D., Jost, A.P.-T., Lambert, T., Waters, J. and Elliott, H. (2017) DeadNet: identifying phototoxicity from label-free microscopy images of cells using deep ConvNets. arXiv Prepr. arXiv:1701.06109, 1–19
56. Kusumoto, D., Lachmann, M., Kunihiro, T., Yuasa, S., Kishino, Y., Kimura, M. et al. (2018) Automated deep learning-based system to identify endothelial cells derived from induced pluripotent stem cells. Stem Cell Rep. 10, 1687–1695
57. Arganda-Carreras, I., Kaynig, V., Rueden, C., Eliceiri, K.W., Schindelin, J., Cardona, A. et al. (2017) Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics 33, 2424–2426
58. Van Valen, D.A., Kudo, T., Lane, K.M., Macklin, D.N., Quach, N.T., DeFelice, M.M. et al. (2016) Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments. PLoS Comput. Biol. 12, 1–24
59. Long, J., Shelhamer, E. and Darrell, T. (2015) Fully convolutional networks for semantic segmentation. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 3431–3440
60. Ciresan, D.C., Giusti, A., Gambardella, L.M. and Schmidhuber, J. (2012) Deep neural networks segment neuronal membranes in electron microscopy images. NIPS, 1–9
61. Ning, F., Delhomme, D., LeCun, Y., Piano, F., Bottou, L., Barbano, P.E. et al. (2005) Toward automatic phenotyping of developing embryos from videos. IEEE Trans. Image Process. 14, 1360–1371
62. Kainz, P., Pfeiffer, M. and Urschler, M. (2017) Semantic segmentation of colon glands with deep convolutional neural networks and total variation segmentation. PeerJ 5, e3874
63. Bentaieb, A. and Hamarneh, G. (2016) Topology aware fully convolutional networks for histology gland segmentation. Int. Conf. Med. Image Comput. Comput. Interv., 460–468
64. Chen, H., Qi, X., Yu, L. and Heng, P.-A. (2016) DCAN: deep contour-aware networks for accurate gland segmentation. Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2487–2496
65. Li, W., Manivannan, S., Akbar, S., Zhang, J., Trucco, E. and McKenna, S.J. (2016) Gland segmentation in colon histology images using hand-crafted features and convolutional neural networks. Proc. Int. Symp. Biomed. Imaging, 1405–1408
66. Xu, Y., Li, Y., Liu, M., Wang, Y., Lai, M. and Chang, E.I.C. (2016) Gland instance segmentation by deep multichannel side supervision. Lect. Notes Comput. Sci. 9901 LNCS, 496–504
67. Xu, J., Luo, X., Wang, G., Gilmore, H. and Madabhushi, A. (2017) A deep convolutional neural network for segmenting and classifying epithelial regions in histopathological images. Neurocomputing 191, 214–223
68. Litjens, G., Sánchez, C.I., Timofeeva, N., Hermsen, M., Nagtegaal, I., Kovacs, I. et al. (2016) Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci. Rep. 6, 1–11
69. Naylor, P., Laé, M., Reyal, F. and Walter, T. (2019) Segmentation of nuclei in histopathology images by deep regression of the distance map. IEEE Trans. Med. Imaging 38, 448–459
70. Chen, C.L., Mahjoubfar, A., Tai, L., Blaby, I.K. and Huang, A. (2016) Deep learning in label-free cell classification. Sci. Rep. 6, 1–16
71. Chen, J., Yang, L., Zhang, Y., Alber, M. and Chen, D.Z. (2016) Combining fully convolutional and recurrent neural networks for 3D biomedical image segmentation. Adv. Neural Inf. Process. Syst., 3036–3044
72. Dixit, R. and Cyr, R. (2003) Cell damage and reactive oxygen species production induced by fluorescence microscopy: effect on mitosis and guidelines for non-invasive fluorescence microscopy. Plant J. 36, 280–290
73. Hoebe, R.A., Van Oven, C.H., Gadella, T.W.J., Dhonukshe, P.B., Van Noorden, C.J.F. and Manders, E.M.M. (2007) Controlled light-exposure microscopy reduces photobleaching and phototoxicity in fluorescence live-cell imaging. Nat. Biotechnol. 25, 249–253
74. Wichmann, J. and Hell, S.W. (1994) Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt. Lett. 19, 780–782
75. Betzig, E., Patterson, G.H., Sougrat, R., Lindwasser, O.W., Olenych, S., Bonifacino, J.S. et al. (2006) Imaging intracellular fluorescent proteins at nanometer resolution. Science 313, 1642–1645
76. Rust, M.J., Bates, M. and Zhuang, X. (2006) Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–795
77. Gustafsson, M.G.L. (2005) Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proc. Natl Acad. Sci. U.S.A. 102, 13081–13086
78. Gustafsson, N., Culley, S., Ashdown, G., Owen, D.M., Pereira, P.M. and Henriques, R. (2016) Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations. Nat. Commun. 7, 12471
79. Culley, S., Tosheva, K.L., Matos Pereira, P. and Henriques, R. (2018) SRRF: universal live-cell super-resolution microscopy. Int. J. Biochem. Cell Biol. 101, 74–79
80. Laine, R.F., Tosheva, K.L., Gustafsson, N., Gray, R.D.M., Almada, P., Albrecht, D. et al. (2019) NanoJ: a high-performance open-source super-resolution microscopy toolbox. J. Phys. D: Appl. Phys.
81. Nehme, E., Weiss, L.E., Michaeli, T. and Shechtman, Y. (2018) Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica 5, 458–464
82. Boyd, N., Jonas, E., Babcock, H. and Recht, B. (2018) DeepLoco: fast 3D localization microscopy using neural networks. bioRxiv, 267096
83. Hess, S.T. and Nelson, A.J. (2018) Molecular imaging with neural training of identification algorithm (neural network localization identification). Microsc. Res. Tech. 81, 966–972
84. Li, Y., Xu, F., Zhang, F., Xu, P., Zhang, M., Fan, M. et al. (2018) DLBI: deep learning guided Bayesian inference for structure reconstruction of super-resolution fluorescence microscopy. Bioinformatics 34, 284–294
85. Sage, D., Pham, T.-A., Babcock, H., Lukes, T., Pengo, T., Chao, J. et al. (2019) Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software. Nat. Methods 16, 387–395
86. Isola, P., Zhu, J.-Y., Zhou, T. and Efros, A.A. (2017) Image-to-image translation with conditional adversarial networks. Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 1125–1134
87. Azulay, A. and Weiss, Y. (2018) Why do deep convolutional networks generalize so poorly to small image transformations? arXiv Prepr. arXiv:1805.12177
88. Culley, S., Albrecht, D., Jacobs, C., Pereira, P.M., Leterrier, C., Mercer, J. et al. (2018) Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat. Methods 15, 263–266
89. Lehman, J., Clune, J., Misevic, D., Adami, C., Beaulieu, J., Bentley, P.J. et al. (2018) The surprising creativity of digital evolution: a collection of anecdotes from the evolutionary computation and artificial life research communities. arXiv Prepr. arXiv:1803.03453
90. Lotter, W., Kreiman, G. and Cox, D. (2017) Deep predictive coding networks for video prediction and unsupervised learning. arXiv Prepr. arXiv:1605.08104, 1–18
91. Fisch, D.H., Yakimovich, A., Clough, B., Wright, J., Bunyan, M., Howell, M. et al. (2018) An artificial intelligence workflow for defining host-pathogen interactions. eLife 8, e40560
92. Almada, P., Pereira, P.M., Culley, S., Caillol, G., Boroni-Rueda, F., Dix, C.L. et al. (2019) Automating multimodal microscopy with NanoJ-Fluidics. Nat. Commun. 10, 1–9
This is an open access article published by Portland Press Limited on behalf of the Biochemical Society and distributed under the Creative Commons Attribution License 4.0 (CC BY).