
This Experiment On ‘Minimalist Machine Learning Algorithms’ Can Analyse Images From Very Little Data


Researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new approach to machine learning aimed at experimental imaging data. Unlike conventional methods, in which machine learning models depend on tens of thousands of images, this new approach requires far fewer images and can learn much more quickly.

The experiment

The research, headed by Daniël Pelt and James Sethian of Berkeley Lab’s Center for Advanced Mathematics for Energy Research Applications (CAMERA), developed a “Mixed-Scale Dense Convolutional Neural Network (MS-D)” that requires far fewer parameters than traditional methods, which makes for a better learning process when only a small training set is available.

It produces high-resolution images quickly, a process that can otherwise be painstaking. “In many scientific applications, tremendous manual labor is required to annotate and tag images and it can take weeks to produce a handful of carefully delineated images,” said Sethian, who is also a mathematics professor at the University of California, Berkeley. “Our goal was to develop a technique that learns from a very small data set.”

In the paper titled ‘A mixed-scale dense convolutional neural network for image analysis’, published in the Proceedings of the National Academy of Sciences in December 2017, the researchers describe the workings of the algorithm in detail. They exploit the fact that the usual downscaling and upscaling applied to images can be replaced by mathematical convolutions that handle multiple scales within a single layer.
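
To make this concrete, here is a minimal sketch, in PyTorch, of how a single layer can handle multiple scales with dilated convolutions instead of downscaling and upscaling; the class name, channel counts and dilation choices are illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch (not the authors' code): dilated 3x3 convolutions let a single
# layer respond to image features at several scales, without any downscaling
# or upscaling of the feature maps.
import torch
import torch.nn as nn

class MultiScaleLayer(nn.Module):
    def __init__(self, in_ch=1, out_ch_per_scale=2, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 convolution per dilation; padding=d keeps the spatial size
        # unchanged, so every scale produces a feature map of the same shape.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch_per_scale, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        # Concatenate the responses from every scale along the channel axis.
        return torch.cat([torch.relu(conv(x)) for conv in self.convs], dim=1)

x = torch.randn(1, 1, 64, 64)             # one single-channel 64x64 image
print(MultiScaleLayer()(x).shape)          # torch.Size([1, 8, 64, 64])
```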

The research, which is supported by the offices of Advanced Scientific Computing Research and Basic Energy Sciences in the Department of Energy’s Office of Science, is currently focused on understanding how cell structure and morphology influence disease, and on characterising all the cells in a healthy human body.

Where less is more!

With images everywhere, sifting through a large database of them can be a difficult task. Convolutional neural networks and other machine learning methods have revolutionised the ability to quickly identify natural images that look like ones previously seen and catalogued. But where earlier analyses were guided by millions of tagged images and enormous computing time, the new algorithm reduces both drastically.

One of the important aspects of CAMERA’s approach is that it learns from very limited amounts of data, allowing the number of parameters to be reduced sharply.

Machine learning for image-related problems usually relies on deep convolutional neural networks (DCNNs), in which input and intermediate images are convolved across a large number of successive layers, allowing the network to learn highly nonlinear features. These networks rely on downscaling and upscaling operations to capture features at various image scales.

To get deeper insights, additional layer types and connections are often required. “Finally, DCNNs typically use a large number of intermediate images and trainable parameters, often more than 100 million, to achieve results for difficult problems”, the researchers note. The new algorithm achieves accurate results with few intermediate images and parameters, eliminating both the need to tune hyperparameters and the additional layers or connections needed to enable training.

Besides producing good results with fewer parameters, the network is also capable of producing high-resolution images from noisy, low-dose input. A small set of training images processed with a Mixed-Scale Dense network is enough for it to learn to denoise the data. The research paper notes that the reference images were reconstructed using 1,024 acquired X-ray projections to obtain images with relatively low amounts of noise.

“Noisy images of the same object were then obtained by reconstructing using 128 projections. Training inputs were noisy images, with corresponding noiseless images used as target output during training. The trained network was then able to effectively take noisy input data and reconstruct higher resolution images,” the researchers said.
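
As a rough illustration of this training setup, where noisy reconstructions serve as inputs and low-noise reconstructions as targets, the sketch below minimises a pixel-wise loss between the two; the function name, optimiser and settings are assumptions for illustration rather than details from the paper.

```python
# Minimal sketch, assuming paired reconstructions are already available as tensors:
# `noisy` reconstructed from 128 projections, `clean` from 1,024 projections
# (the variable and function names are illustrative, not from the paper).
import torch
import torch.nn as nn

def train_denoiser(model, noisy, clean, epochs=200, lr=1e-3):
    """Fit `model` to map noisy reconstructions to their low-noise counterparts."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                 # pixel-wise loss between output and target
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(noisy), clean)
        loss.backward()
        optimiser.step()
    return model
```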

Theory and algorithm: how it works

Generally, in a convolutional neural network (CNN), each successive network layer receives output from the preceding layers. The output is usually in the form of an image, also called a feature map, mathematically represented as

zᵢ ∈ ℝ^(mᵢ × nᵢ × cᵢ)

where:

  • zᵢ is the output of layer i, a three-dimensional feature map (the networks are multi-layered)
  • ℝ denotes the set of real values taken by the image pixels, indexed by mᵢ, nᵢ, cᵢ
  • mᵢ – rows
  • nᵢ – columns
  • cᵢ – channels
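
As a quick illustration of the notation, a feature map with m = 256 rows, n = 256 columns and c = 3 channels is simply a three-dimensional array of that shape; the values below are arbitrary examples.

```python
# Illustration of the notation: a feature map z_i in R^(m_i x n_i x c_i)
# is just a three-dimensional array of pixel values.
import numpy as np

m, n, c = 256, 256, 3                      # rows, columns, channels (arbitrary example)
z_i = np.zeros((m, n, c), dtype=np.float32)
print(z_i.shape)                           # (256, 256, 3)
```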

In order to capitalise on less data, the MS-D approach to image analysis uses two operations:

  1. Mixed-scale convolutions: instead of downscaling and upscaling, each feature map operates at a different scale, so multiple scales are captured within a single layer
  2. Dense connections: instead of relying only on the output of the immediately preceding layer, each feature map receives the outputs of all preceding layers

By combining these two operations, the researchers obtained a mixed-scale dense network. The network is otherwise similar to other DCNNs, with a rectified linear unit (ReLU) activation function, and is advantageous with regard to image accuracy and the number of trainable parameters.
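
Putting the two operations together, a rough sketch of an MS-D-style network might look like the following; the depth, width, dilation cycle and class name are illustrative assumptions, and this is not the authors’ released implementation.

```python
# Rough sketch of a mixed-scale dense network (not the authors' released code):
# every layer applies a dilated 3x3 convolution to the concatenation of the
# input and ALL previously computed feature maps, followed by a ReLU; a final
# 1x1 convolution maps the densely connected stack to the output image.
import torch
import torch.nn as nn

class MixedScaleDense(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, depth=20, width=1, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_ch
        for i in range(depth):
            d = dilations[i % len(dilations)]        # cycle through the scales
            self.layers.append(
                nn.Conv2d(channels, width, kernel_size=3, padding=d, dilation=d)
            )
            channels += width                        # dense connections grow the stack
        self.final = nn.Conv2d(channels, out_ch, kernel_size=1)

    def forward(self, x):
        features = [x]
        for conv in self.layers:
            # Each layer sees the input plus every earlier feature map.
            features.append(torch.relu(conv(torch.cat(features, dim=1))))
        return self.final(torch.cat(features, dim=1))

net = MixedScaleDense()
print(sum(p.numel() for p in net.parameters()))      # on the order of a few thousand
```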


How can it be availed?

The researchers are making the algorithm available through a web portal titled “Segmenting Labeled Image Data Engine (SlideCAM)” as part of the CAMERA suite of tools for DOE experimental facilities. The effort is led by Olivia Jain and Simon Mo.

Real world applications

Researchers are currently working to apply the method to biological problems such as biological reconstruction and brain mapping. “These new approaches are really exciting, since they will enable the application of machine learning to a much greater variety of imaging problems than currently possible,” Pelt said.

“By reducing the amount of required training images and increasing the size of images that can be processed, the new architecture can be used to answer important questions in many research fields”, he said.

These methods are poised to provide a new computational tool for analysing data across a wide range of research areas in the days to come.


