Local Area Transform for Cross-Modality Correspondence Matching and Deep Scene Recognition

Published 3 Jan 2019 in cs.CV (arXiv:1901.00927v1)

Abstract: Establishing correspondences is a fundamental task in a variety of image processing and computer vision applications. In particular, finding correspondences between an image pair related by a nonlinear deformation induced by differing modality conditions is a challenging problem. This paper describes an efficient yet powerful image transform, the local area transform (LAT), for modality-robust correspondence estimation. Specifically, LAT transforms an image from the intensity domain to the local area domain, which is invariant under nonlinear intensity deformations, in particular radiometric, photometric, and spectral deformations. In addition, robust feature descriptors are reformulated with LAT for several practical applications. Furthermore, a LAT-convolution layer and an Aception block are proposed, and with these novel components a deep neural network called LAT-Net is built, specifically for the scene recognition task. Experimental results show that LAT-transformed images remain consistent across nonlinearly deformed image pairs, even under random intensity deformations, and that LAT reduces the mean absolute difference compared with conventional methods. Moreover, descriptors reformulated with LAT outperform their conventional counterparts, a promising result for cross-spectral and cross-modality correspondence matching. The local area domain can thus be considered an alternative to the intensity domain for robust correspondence matching, image recognition, and many related applications, such as feature matching, stereo matching, dense correspondence matching, and image retrieval.
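To make the invariance claim concrete, here is a minimal sketch of the idea behind a local-area-style transform. It assumes a simplified definition, chosen for illustration and not taken from the paper: each pixel is mapped to the area (pixel count) of the 4-connected region of pixels whose intensity is greater than or equal to its own. Under any strictly increasing intensity deformation, all such comparisons are preserved, so the transformed image is unchanged; the paper's actual LAT formulation may differ in detail.

```python
from collections import deque

def local_area_transform(img):
    """Toy local-area transform (an illustrative simplification, not the
    paper's exact definition): each pixel maps to the size of the
    4-connected region of pixels with intensity >= that pixel's own."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            t = img[y][x]
            seen = {(y, x)}
            queue = deque([(y, x)])
            area = 0
            while queue:  # BFS flood fill over pixels with intensity >= t
                cy, cx = queue.popleft()
                area += 1
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and (ny, nx) not in seen
                            and img[ny][nx] >= t):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            out[y][x] = area
    return out

img = [[10, 50, 50],
       [10, 80, 50],
       [20, 20, 90]]
# A monotonic (gamma-like) intensity deformation: pixel ranks are preserved,
# so the local-area representation is identical for both images.
deformed = [[round((v / 90) ** 0.5 * 255) for v in row] for row in img]
assert local_area_transform(img) == local_area_transform(deformed)
```

The assertion holds because a strictly increasing deformation cannot change any `>=` relation between pixels, which is exactly the sense in which an area-based domain is robust to radiometric, photometric, and spectral changes.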
