
MMDF: Mobile Microscopy Deep Framework

- By Anastasiia Kornilova, Mikhail Salnikov, Olga Novitskaya, Maria Begicheva, Egor Sevriugov, Kirill Shcherbakov



Abstract 
In the last decade, major progress has been made in the development of mobile microscopes, as well as in the application of mobile microscopy to real-life disease diagnostics and many other important areas (air/water quality monitoring, education, agriculture). In this study, we applied Deep Learning image processing techniques (in-focus/out-of-focus classification, image deblurring and denoising, multi-focus image fusion) to data obtained from a mobile microscope. For every task, we present an overview of significant prior work and highlight the most suitable approaches.








With the development of optical microscopy technologies, the cost of simple microscopes has become low enough for mass usage. A considerable role in that class is played by mobile microscopy – a field where a smartphone's camera and computational resources are combined with universal optical microscopes for fast diagnostics in different areas: disease diagnosis (malaria, tuberculosis, some types of cancer in developing countries), at-home diagnostics, agriculture analysis, water and ocean quality and pollution analysis, and education.

In the past 10 years, many solutions have been proposed in the field of mobile microscopy — special lenses, devices, and classical-looking optical microscopes with cheaper optics. With widespread smartphones, image quality is good enough to distinguish important specimen parts in frames from a mobile microscope. The majority of these works were devoted to the mechanical parts of microscopes to increase overall image quality; nonetheless, there are many digital processing techniques, especially in Deep Learning, that can improve the quality of biomedical microscopic images: filtering of in-focus/out-of-focus images, focus-stacking, super-resolution and deblurring, stitching, etc.

The main features of mobile microscopes are a bright-field mode only (no fluorescence modality), lower image quality in comparison with professional microscopes, and artefacts in the optical system (dust, water drops, condensate). All of this prevents applying existing algorithms and pre-trained models directly to mobile microscopy data.

The main techniques chosen for this work are the following:

  • In-focus/out-of-focus image classification. 
Despite the auto-focusing systems in modern microscopes, obtained images still have to be filtered to remove blurred images with artefacts and to find planes of one specimen with different in-focus areas.

  • Fast-scanning image deblurring.
Obtaining high-quality images takes much more time during scanning. The idea is to reuse Deep Learning deblurring techniques for fast scanning, so that the original image can be reconstructed from the one obtained during fast movement.

  • Focus-stacking (multi-focus image fusion). 
Because optical microscopes usually have a shallow depth of field, a volumetric specimen cannot be studied at a single focal length. To understand the specimen structure, a specialist has to study different focal planes of the specimen, each with different areas in focus.

Main Contributions

  • A CNN model for in-focus/out-of-focus classification that is robust to specific mobile microscopy artefacts was proposed, and a comparison with other solutions developed in this field was carried out.
  • Deblurred images were obtained with U-Net, SRCNN, and DeblurGAN. The latter produced sharper pictures, as seen in a visual comparison with the results of the baseline models. The DeblurGAN results were further improved by adjusting the training dataset and by upgrading the standard model to remove artefacts.
  • A FuseGAN model was proposed for combining in-focus parts from several images with a high level of detail and smooth transitions on specific mobile microscopy data, and a comparison with existing models for this task was carried out.
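For context on what FuseGAN improves upon: a classical, non-learned focus-stacking baseline simply picks, for every pixel, the frame of the focal stack with the strongest local Laplacian response. A minimal NumPy illustration (this per-pixel selection rule is a textbook baseline shown for comparison, not the FuseGAN method; it tends to produce the hard seams that GAN-based fusion smooths out):

```python
import numpy as np

def laplacian_energy(img):
    """Per-pixel squared Laplacian response (zero at the 1-pixel border)."""
    e = np.zeros(img.shape, dtype=np.float64)
    e[1:-1, 1:-1] = (-4.0 * img[1:-1, 1:-1]
                     + img[:-2, 1:-1] + img[2:, 1:-1]
                     + img[1:-1, :-2] + img[1:-1, 2:]) ** 2
    return e

def focus_stack(images):
    """Fuse a focal stack by taking, per pixel, the frame with the sharpest response."""
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    energy = np.stack([laplacian_energy(frame) for frame in stack])
    best = energy.argmax(axis=0)              # index of the sharpest frame per pixel
    rows = np.arange(best.shape[0])[:, None]  # fancy indexing to gather the winners
    cols = np.arange(best.shape[1])[None, :]
    return stack[best, rows, cols]
```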
Conclusion

In our work, we considered different networks proposed in the microscopy field for image quality assessment, determined the requirements for our task, and chose the most suitable one. The approach was improved and adapted to our data. Several hypotheses concerning false positives on dust were tested, and an accuracy of more than 95% on test images was achieved. 

We also tested various models for obtaining sharp denoised images: U-Net, SRCNN, DeblurGAN, Denoising Autoencoder, Stacked Denoising Autoencoder, and Deep Coupled Autoencoder. According to the results, DeblurGAN generates sharper images than the other methods. Adding synthetically blurred images to the training dataset and increasing the number of training pictures improves the quality of the output test images. We resolved the problem of certain artefacts in the output images, which negatively affected the computed metrics. However, the standard PSNR and SSIM metrics fail to provide a fine-grained comparison of images.
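For reference, the PSNR metric mentioned above is a one-line formula over the mean squared error between the restored image and the ground truth; a small NumPy helper (the `data_range` of 255 assumes 8-bit images):

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB; data_range=255 assumes 8-bit images."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM is considerably more involved (local means, variances, and covariances over a sliding window); in practice a library implementation such as `skimage.metrics.structural_similarity` is typically used. Both metrics compare pixels rather than perceptual sharpness, which is why they can rank a slightly misaligned but sharp GAN output below a smooth, blurry one.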
