Unsupervised Learning for Deformable Registration

The authors highlight several shortcomings of contemporary learning-based image registration methods: the inaccuracy of the correspondences provided for training (especially when the deformed subject image differs significantly from the template image), the difficulty of incorporating new image features without repeating the entire training procedure, and the lack of variation in the training image features, largely because of the prohibitive computational cost involved. Moreover, the authors note that the best features are "often learnt only at the template space", meaning that if the template image is changed, the whole training procedure has to be redone. ...

November 14, 2018 · 3 min · Kumar Abhishek

Achieving Dermatologist-level Classification Performance of Skin Lesion Images

The Dataset: The paper uses a new dermatologist-labelled dataset of 129,450 clinical images, which also includes 3,374 dermoscopic images. The images come from 18 clinician-curated, open-access online repositories as well as clinical data from the Stanford University Medical Center, and span 2,032 different diseases. The data is split into 127,463 training and validation images and 1,942 biopsy-labelled test images. ...

November 7, 2018 · 3 min · Kumar Abhishek

Feature Representation and Multi-modal Fusion using Deep Boltzmann Machine

This paper proposes learning a high-level latent, shared feature representation from neuroimaging modalities (MRI and PET) via deep learning for the diagnosis of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI). In contrast to previous works, where multimodal features were combined by concatenating them into long vectors or transforming them into a high-dimensional kernel space, the authors propose using a Deep Boltzmann Machine (DBM) to find a latent hierarchical representation from a 3D patch, and then devise a method for “a joint feature representation from the paired patches of MRI and PET with a multimodal DBM.” ...
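
For orientation, the energy that a standard two-layer DBM assigns to a visible patch $\mathbf{v}$ and hidden layers $\mathbf{h}^{(1)}, \mathbf{h}^{(2)}$ is shown below (bias terms omitted); this is the generic DBM formulation rather than the paper's exact notation, and in the multimodal setting the MRI and PET pathways are coupled through a shared top hidden layer.

$$
E\big(\mathbf{v}, \mathbf{h}^{(1)}, \mathbf{h}^{(2)}\big) = -\mathbf{v}^{\top}\mathbf{W}^{(1)}\mathbf{h}^{(1)} - \big(\mathbf{h}^{(1)}\big)^{\top}\mathbf{W}^{(2)}\mathbf{h}^{(2)}, \qquad P(\mathbf{v}) \propto \sum_{\mathbf{h}^{(1)}, \mathbf{h}^{(2)}} e^{-E\left(\mathbf{v}, \mathbf{h}^{(1)}, \mathbf{h}^{(2)}\right)}
$$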

November 7, 2018 · 3 min · Kumar Abhishek

V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation

This paper proposes an end-to-end trained fully convolutional neural network model for processing 3D image volumes. Unlike previous works that processed the input volumes slice-wise or patch-wise, the authors propose using volumetric convolutions. Moreover, a new objective function based on the Dice coefficient is proposed, and the authors demonstrate the fast and superior performance of the algorithm on the segmentation of prostate MRI volumes. ...
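
As a rough, runnable illustration of the idea (not the authors' implementation), a soft Dice objective between a predicted probability volume $p$ and a binary ground-truth volume $g$ can be computed as below, using the squared-denominator form $D = \frac{2\sum_i p_i g_i}{\sum_i p_i^2 + \sum_i g_i^2}$ given in the paper; training then minimizes $1 - D$.

```python
import numpy as np

def soft_dice(pred, target, eps=1e-7):
    """Soft Dice coefficient between a predicted probability volume and a
    binary ground-truth volume; both arrays are flattened over all voxels."""
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    intersection = np.sum(pred * target)
    denom = np.sum(pred ** 2) + np.sum(target ** 2)
    return (2.0 * intersection + eps) / (denom + eps)

# Example: `pred` would be the sigmoid/softmax output of the network,
# and the training loss is 1.0 - soft_dice(pred, target).
```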

November 7, 2018 · 3 min · Kumar Abhishek

Matching with Shape Contexts

Given two shapes, $N$ samples are drawn from the edge elements of each shape. There are no specific constraints on these points - they can lie on either the internal or the external contour of the object. Moreover, they need not correspond to keypoints of the shape (such as maxima of curvature, inflection points, etc.), and although it is desirable that the samples be uniformly spaced, this too is not a rigid criterion. ...
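
For reference, the shape context of a sampled point $p_i$ in Belongie et al.'s formulation is a coarse log-polar histogram of the relative positions of the remaining $N-1$ points, and the cost of matching two points is the $\chi^2$ distance between their histograms:

$$
h_i(k) = \#\left\{\, q \neq p_i : (q - p_i) \in \mathrm{bin}(k) \,\right\}, \qquad C_{ij} = \frac{1}{2}\sum_{k}\frac{\left[h_i(k) - h_j(k)\right]^2}{h_i(k) + h_j(k)}
$$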

October 31, 2018 · 3 min · Kumar Abhishek