- By Anastasiia Kornilova, Mikhail Salnikov, Olga Novitskaya, Maria Begicheva, Egor Sevriugov, Kirill Shcherbakov
Abstract
In the last decade, major progress has been made in the development of mobile microscopes as well as in the application of mobile microscopy to real-life disease diagnostics and many other important areas (air and water quality monitoring, education, agriculture). In the current study, we applied Deep Learning image processing techniques (in-focus/out-of-focus classification, image deblurring and denoising, multi-focus image fusion) to data obtained from a mobile microscope. We present an overview of significant works for each task and highlight the most suitable approaches.
With the development of optical microscopy technologies, the cost of simple microscopes has become low enough for mass usage. A considerable role in this class is played by mobile microscopy, a field where the smartphone camera and computational resources are combined with universal optical microscopes for fast diagnostics in different areas: disease diagnosis (malaria, tuberculosis, some types of cancer in developing countries), at-home diagnostics, agricultural analysis, water and ocean quality and pollution analysis, and education.
In the past ten years, many solutions have been proposed in the field of mobile microscopy: special lenses, attachable devices, and classical-looking optical microscopes with cheaper optics. The cameras of widespread smartphones are already good enough to distinguish important specimen details in images from a mobile microscope. The majority of these works were devoted to the mechanical parts of microscopes in order to increase the overall image quality; nonetheless, there are many digital processing techniques, especially in Deep Learning, that can increase the quality of biomedical microscopic images, such as filtering of in-focus/out-of-focus images, focus stacking, super-resolution, deblurring, stitching, etc.
The main features of mobile microscopes are bright-field mode only (no fluorescence modality), lower image quality in comparison with professional microscopes, and artefacts in the optical system (dust, water drops, condensate). All these factors prevent applying existing algorithms and pre-trained models directly to mobile microscopy data.
The main techniques chosen for this work are the following:
- In-focus/out-of-focus image classification. Despite the auto-focusing systems in modern microscopes, the obtained images still need to be filtered to remove blurred images with artefacts and to find planes of one specimen with differently focused areas (see the classification sketch after this list).
- Fast-scanning image deblurring. Obtaining high-quality images requires much more scanning time. The idea is to reuse Deep Learning deblurring techniques for fast scanning, reconstructing the sharp image from the one obtained during fast movement (a data-preparation sketch follows the list).
- Focus stacking (multi-focus image fusion). Because optical microscopes usually have a shallow depth of field, a volumetric specimen cannot be studied at a single focal distance. To understand the specimen structure, a specialist has to examine different focal planes of the specimen, each with different areas in focus (a baseline fusion sketch is given after the list).
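As a rough illustration of the first task, the sketch below shows a small binary CNN classifier for in-focus/out-of-focus patches in PyTorch; the architecture, patch size, and names are illustrative assumptions rather than the exact model proposed in this work.

```python
# Minimal PyTorch sketch of a binary in-focus/out-of-focus classifier.
# The architecture, patch size, and names are illustrative assumptions,
# not the exact model evaluated in this work.
import torch
import torch.nn as nn

class FocusClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # logits: 0 = out-of-focus, 1 = in-focus

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

# Example: classify a batch of 64x64 RGB patches cropped from one frame.
model = FocusClassifier()
patches = torch.rand(8, 3, 64, 64)
labels = model(patches).argmax(dim=1)  # per-patch in-focus decision
```

Running the classifier on patches rather than full frames is one way to handle planes where only some areas of the specimen are in focus.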
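For the fast-scanning deblurring task, supervised training pairs can be produced by synthetically blurring sharp slow-scan frames (the conclusion notes that adding synthetically blurred images improved results). The sketch below applies a simple linear motion-blur kernel with OpenCV; the kernel model, its parameters, and the stand-in input are assumptions for illustration.

```python
# Sketch: generate (blurred, sharp) training pairs by convolving sharp frames
# with a straight-line motion kernel, approximating fast-scanning motion.
# Kernel model and parameters are assumptions for illustration.
import cv2
import numpy as np

def motion_blur(image, length=15, angle_deg=0.0):
    """Convolve an image with a linear motion-blur kernel of a given length and angle."""
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0 / length                     # horizontal line
    center = ((length - 1) / 2.0, (length - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)     # rotate the line
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    kernel /= kernel.sum()
    return cv2.filter2D(image, -1, kernel)

# Stand-in for a sharp slow-scan frame (a real frame would be loaded instead).
sharp = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
blurred = motion_blur(sharp, length=21, angle_deg=30.0)
# (blurred, sharp) now forms one supervised pair for training a deblurring network.
```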
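For the multi-focus fusion task, a classical baseline is to keep, for every pixel, the value from the frame of the focal stack with the strongest local Laplacian (sharpness) response. The sketch below implements this reference method; it is not the FuseGAN model described in the contributions.

```python
# Classical focus-stacking baseline: per-pixel selection of the stack frame
# with the highest smoothed Laplacian (sharpness) response.
# A simple reference method, not the FuseGAN model used in this work.
import cv2
import numpy as np

def focus_stack(frames):
    """frames: list of aligned BGR images of identical size and dtype."""
    stack = np.stack(frames)                                   # (N, H, W, 3)
    sharpness = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))          # focus measure
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))     # smooth the map
    best = np.argmax(np.stack(sharpness), axis=0)              # (H, W) frame index
    fused = np.take_along_axis(stack, best[None, :, :, None], axis=0)[0]
    return fused
```

A learned fusion model such as FuseGAN aims to produce smoother transitions between regions than this hard per-pixel selection.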
Main Contributions
- A CNN model for in-focus/out-of-focus classification that is robust to specific mobile microscopy artefacts was proposed, and a comparison with other solutions developed in this field was carried out.
- Deblurred images were obtained with U-Net, SRCNN, and DeblurGAN. The latter produced sharper pictures, as seen in a visual comparison with the results of the baseline models. The DeblurGAN results were further improved by adjusting the training dataset and removing artefacts through modifications of the standard model.
- A FuseGAN model for combining in-focus parts from several images with a high level of detail and smooth transitions on specific mobile microscopy data was proposed, and a comparison with existing models for this task was carried out.
Conclusion
In our work, we considered different networks proposed in the microscopy field for image quality assessment, identified the requirements of our task, and chose the most suitable one. The approach was improved and adapted to our data. Several hypotheses addressing false positives on dust were tested, and an accuracy of more than 95% on test images was achieved.
We also tested various models for obtaining sharp denoised images: U-Net, SRCNN, DeblurGAN, Denoising Autoencoder, Stacked Denoising Autoencoder, and Deep Coupled Autoencoder. According to the results, DeblurGAN generates sharper images than the other methods. Adding synthetically blurred images to the training dataset and increasing the number of training pictures improves the quality of the output test images. We resolved the problem of artefacts in the output images, which negatively affected the computed metrics. However, standard PSNR and SSIM metrics fail to provide a fine-grained comparison of images.
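For reference, the PSNR and SSIM values discussed above can be computed with scikit-image as sketched below (assuming a recent scikit-image version with the `channel_axis` argument; the input arrays are stand-ins for a ground-truth frame and a restored output).

```python
# Computing the standard PSNR and SSIM metrics with scikit-image.
# The arrays below are stand-ins for a ground-truth frame and a restored output.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
restored = np.clip(reference + np.random.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
ssim = structural_similarity(reference, restored, data_range=255, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```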