Image Registration: From SIFT to Deep Learning

Deep Learning Approaches

Most current research in image registration concerns the use of deep learning. In the past few years, deep learning has enabled state-of-the-art performance in Computer Vision tasks such as image classification, object detection, and segmentation. There is no reason why this couldn't also be the case for Image Registration.

Feature Extraction

The first way deep learning was used for image registration was for feature extraction. Convolutional neural networks' successive layers manage to capture increasingly complex image characteristics and learn task-specific features. Since 2014, researchers have applied these networks to the feature extraction step in place of SIFT and similar algorithms.

  • In 2014, Dosovitskiy et al. proposed a generic feature learning method to train a convolutional neural network using only unlabeled data. The genericity of these features enabled them to be robust to transformations. These features, or descriptors, outperformed SIFT descriptors for matching tasks.
  • In 2018, Yang et al. developed a non-rigid registration method based on the same idea. They used layers of a pre-trained VGG network to generate a feature descriptor that keeps both convolutional information and localization capabilities. These descriptors also seem to outperform SIFT-like detectors, particularly in cases where the SIFT matches contain many outliers or where SIFT cannot match a sufficient number of feature points.

The code for this last paper can be found here. While we were able to test this registration method on our own images within 15 minutes, the algorithm is approximately 70 times slower than the SIFT-like methods implemented earlier in this article.
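
To make this idea concrete, here is a minimal sketch of such learned descriptors: intermediate feature maps of a pre-trained VGG network are treated as a grid of descriptors (one per spatial cell) and matched by nearest neighbor. The layer choice, input size, and file names are illustrative assumptions, not the exact configuration of Yang et al.

```python
# Sketch: dense descriptors from a pre-trained VGG (illustrative layer
# choice, not the exact setup of Yang et al.)
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg16(pretrained=True).features[:16].eval()  # up to conv3_3

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_descriptors(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = vgg(img)                          # (1, 256, 56, 56)
    c, h, w = fmap.shape[1:]
    desc = fmap.squeeze(0).reshape(c, h * w).T   # one 256-d descriptor per cell
    return F.normalize(desc, dim=1)

# Hypothetical file names; replace with your own image pair
d_ref = deep_descriptors("reference.jpg")
d_sen = deep_descriptors("sensed.jpg")
matches = (d_ref @ d_sen.T).argmax(dim=1)  # nearest neighbor on cosine similarity
```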

Homography Learning

Instead of limiting the use of deep learning to feature extraction, researchers tried to use a neural network to directly learn the geometric transformation to align two images.

Supervised Learning
In 2016, DeTone et al. published Deep Image Homography Estimation, which describes Regression HomographyNet, a VGG-style model that learns the homography relating two images. This algorithm presents the advantage of learning the homography and the CNN model parameters simultaneously, in an end-to-end fashion: no need for the previous two-stage process!

The network produces eight real-valued numbers as output: the displacements of the four patch corners, a so-called 4-point parameterization of the homography. It is trained in a supervised fashion with a Euclidean (L2) loss between the output and the ground-truth homography.

Like any supervised approach, this homography estimation method requires labeled pairs of data. While it is easy to obtain ground-truth homographies for artificial image pairs, it is much more expensive to do so on real data.
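
Conveniently, generating such artificial pairs is simple to reproduce: perturb the four corners of a random image patch and use the eight corner offsets as the regression target. Below is a minimal sketch of this generation step in the spirit of DeTone et al.; the patch size and perturbation range follow the paper's defaults, but the helper itself is illustrative.

```python
# Sketch: generating a labeled pair via the 4-point parameterization
# (illustrative helper, not the authors' code).
import cv2
import numpy as np

def make_training_pair(image, patch=128, rho=32):
    h, w = image.shape[:2]  # assumes image larger than patch + 2 * rho
    x = np.random.randint(rho, w - patch - rho)
    y = np.random.randint(rho, h - patch - rho)
    corners = np.float32([[x, y], [x + patch, y],
                          [x + patch, y + patch], [x, y + patch]])
    offsets = np.random.randint(-rho, rho, size=(4, 2)).astype(np.float32)
    # Homography mapping the original corners onto the perturbed ones
    H = cv2.getPerspectiveTransform(corners, corners + offsets)
    warped = cv2.warpPerspective(image, np.linalg.inv(H), (w, h))
    patch_a = image[y:y + patch, x:x + patch]
    patch_b = warped[y:y + patch, x:x + patch]
    return patch_a, patch_b, offsets.flatten()  # the 8 values to regress
```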

Unsupervised Learning
With this in mind, Nguyen et al. presented an unsupervised approach to deep image homography estimation. They kept the same CNN but used a new loss function adapted to the unsupervised setting: a photometric loss that requires no ground-truth label. Instead, it computes the similarity between the reference image and the warped sensed image.

Their approach introduces two new network structures: a Tensor Direct Linear Transform and a Spatial Transformation Layer. We will not go into the details of these components here; it is enough to know that they are used to obtain a warped sensed image from the homography parameters output by the CNN model, which is then used to compute the photometric loss.
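
A minimal sketch of this loss is shown below, assuming a differentiable homography warp. Note that kornia's warp_perspective stands in for the paper's Tensor DLT and Spatial Transformation Layer here; it is not the authors' implementation.

```python
# Sketch of the photometric loss; kornia's differentiable warp is an
# assumption standing in for the paper's Tensor DLT + Spatial Transformation
# Layer.
import torch.nn.functional as F
from kornia.geometry import warp_perspective

def photometric_loss(reference, sensed, H):
    """reference, sensed: (B, 1, H, W) tensors; H: (B, 3, 3) predicted homographies."""
    h, w = reference.shape[-2:]
    warped = warp_perspective(sensed, H, dsize=(h, w))  # differentiable warp
    return F.l1_loss(warped, reference)  # no ground-truth homography needed
```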

The authors claim that this unsupervised method achieves accuracy and robustness to illumination variation comparable to or better than traditional feature-based methods, with faster inference speed. In addition, it has superior adaptability and performance compared to the supervised method.

Other Approaches

Reinforcement Learning
Deep reinforcement learning is gaining traction as a registration method for medical applications. Instead of a pre-defined optimization algorithm, this approach uses a trained agent to perform the registration.

  • In 2016, Liao et al. were the first to use reinforcement learning for image registration. Their method relies on a greedy supervised algorithm for end-to-end training. Its goal is to align the images by finding the best sequence of motion actions (the sketch after this list illustrates this action loop). This approach outperformed several state-of-the-art methods, but it was only used for rigid transformations.
  • Reinforcement Learning has also been used for more complex transformations. In Robust non-rigid registration through agent-based action learning, Krebs et al. apply an artificial agent to optimize the parameters of a deformation model. This method was evaluated on inter-subject registration of prostate MRI images and showed promising results in 2-D and 3-D.
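
Despite their differences, these agent-based methods share a simple skeleton: the agent repeatedly picks a small motion action until the images align. The sketch below shows that loop under stated assumptions; the action set and the policy function are placeholders, not taken from either paper.

```python
# Sketch of the action-selection loop shared by agent-based registration.
# ACTIONS and `policy` are illustrative placeholders, not from the papers.
import numpy as np

ACTIONS = [  # small rigid motions: (dx, dy, dtheta_degrees)
    (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1),
]

def register(reference, sensed, policy, steps=200):
    pose = np.zeros(3)  # accumulated (tx, ty, theta)
    for _ in range(steps):
        # The trained agent scores the current alignment and picks an action
        action = policy(reference, sensed, pose)
        pose += ACTIONS[action]
    return pose  # rigid transform mapping the sensed image to the reference
```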

Complex Transformations
A significant proportion of current research in image registration concerns the field of medical imagery. Oftentimes, the transformation between two medical images cannot simply be described by a homography matrix because of local deformations of the subject (due to breathing, anatomical changes, etc.). More complex transformation models are necessary, such as diffeomorphisms, which can be represented by displacement vector fields.

Researchers have tried to use neural networks to estimate these large deformation models that have many parameters.

  • A first example is Krebs et al.’s Reinforcement Learning method mentioned just above.
  • In 2017, de Vos et al. proposed DIRNet, a network that uses a CNN to predict a grid of control points from which a displacement vector field is generated to warp the sensed image onto the reference image (see the warping sketch after this list).
  • Quicksilver registration tackles a similar problem. Quicksilver uses a deep encoder-decoder network to predict patch-wise deformations directly from image appearance.
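
To give a feel for this representation, here is a minimal sketch of warping an image with a dense displacement vector field. In DIRNet-style methods the field would come from the network's control-point predictions; here it is simply an input, and the helper is illustrative.

```python
# Sketch: resampling a sensed image with a dense displacement vector field,
# the representation used by DIRNet-style methods. In practice the field
# would be predicted by a network; here it is simply passed in.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_field(image, dx, dy):
    """image: (H, W) array; dx, dy: per-pixel displacements, same shape."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sample the image at the displaced coordinates (bilinear interpolation)
    coords = np.stack([ys + dy, xs + dx])
    return map_coordinates(image, coords, order=1, mode="nearest")
```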