|PROJECTS AT VISL FINISHED IN 2002|
The area of Digital Terrain Model (DTM) matching has been neglected for the last decade; few have tried to tackle this interesting and practical problem. The project is based on the work of J.J. Rodriguez and J.K. Aggarwal from 1990, and we strongly recommend reading that work for further background on this project. Moreover, this project improves on the said work and deals with more realistic problems and questions raised by it. The project contains solutions found empirically and thus opens the way for further study in improving the model and automating the methods used.
The previous work lacks a number of more realistic features; the following algorithm deals with these issues.
In order to achieve matching under rotation and translation, we require invariant features to be present in both images. The invariant feature we chose is the curvature versus arc length of the images' zero set.
We retrieve the zero-set curvature as follows:
First we pass the image through a Laplacian of Gaussian (LoG) filter; as the name implies, this is simply a Gaussian smoothing followed by the Laplacian operator. This gives us a cliff-face edge detector that uses an adaptive threshold (retrieved via a Canny filter) to extract the key zero set of the image.
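The LoG filtering and zero-crossing detection can be sketched as follows, assuming SciPy is available. The function name and the sigma value are illustrative, and a simple sign-change test stands in for the adaptive (Canny-derived) thresholding stage described above:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_set(dtm, sigma=2.0):
    """Return a boolean mask of zero crossings of the LoG-filtered terrain."""
    log = gaussian_laplace(dtm.astype(float), sigma=sigma)
    signs = np.sign(log)
    crossings = np.zeros(signs.shape, dtype=bool)
    # A pixel is a zero crossing if its sign differs from its right or
    # bottom neighbor's sign.
    crossings[:-1, :] |= signs[:-1, :] * signs[1:, :] < 0
    crossings[:, :-1] |= signs[:, :-1] * signs[:, 1:] < 0
    return crossings
```

Applied to a smooth terrain bump, the crossings trace a closed contour around the feature.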
We interpolate the resulting lines so that each directional move along the chain code represents approximately an equal distance.
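The equal-spacing interpolation can be sketched like this; the function name and the choice of linear interpolation over the cumulative arc length are illustrative assumptions:

```python
import numpy as np

def resample_equal_steps(points, n):
    """Resample a polyline so consecutive samples are equally spaced in arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    t = np.linspace(0.0, s[-1], n)                # equally spaced targets
    x = np.interp(t, s, points[:, 0])
    y = np.interp(t, s, points[:, 1])
    return np.column_stack([x, y])
```

For example, resampling the L-shaped polyline (0,0)-(1,0)-(1,1) into 5 points yields steps of equal length 0.5 along the path.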
Retrieving the curvature versus arc length requires passing the interpolated chain code through a derivative of Gaussian (DoG) filter. This gives us a smoothed curvature versus arc length. We are only interested in the key features of this curve for the comparison test that follows; thus, we retrieve from the graph its local extrema points, since they represent the key features of the graph.
After obtaining the two sets of point vectors, we choose preliminary matching candidates for each point by examining the curvature in the area around the point on one image and comparing it to the curvature areas around each of the second image's extrema points.
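The candidate-matching step can be sketched as below. Comparing windowed curvature profiles by mean squared difference is one interpretation of comparing "curvature areas"; the function name, window half-width, and tolerance are all illustrative assumptions:

```python
import numpy as np

def candidate_matches(curv_a, peaks_a, curv_b, peaks_b, half=5, tol=0.5):
    """Map each extremum index in signal A to plausible extrema in B by
    comparing the curvature profile in a small window around each point."""
    def window(curv, i):
        return curv[max(i - half, 0):i + half + 1]
    matches = {}
    for ia in peaks_a:
        wa = window(curv_a, ia)
        candidates = []
        for ib in peaks_b:
            wb = window(curv_b, ib)
            n = min(len(wa), len(wb))
            score = np.mean((wa[:n] - wb[:n]) ** 2)  # dissimilarity of profiles
            if score < tol:
                candidates.append(ib)
        matches[ia] = candidates
    return matches
```

A peak matches other peaks with a similar local profile and rejects troughs, which is exactly the pre-filtering the RANSAC stage below relies on.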
Now we have a series of points from one image and possible matches to the points on the other image. The problem is thus reduced, so that the following Random Sample Consensus (RANSAC) algorithm can retrieve a rotation and translation that transform one set of points to the other, which is the same rotation and translation that moves one image onto the other.
The RANSAC algorithm works as follows:
It selects 4 random extrema points of one image and compares them to the matching points of the second image.
The comparison is done using a function that gives a closed-form solution of absolute orientation using orthonormal matrices.
The resulting orientation is tested on all the extrema points: the distance between each transformed point and its counterpart is calculated, and the number of distances that fall under an empirical threshold is counted and saved. This is repeated for L iterations.
The orientation that gives us the largest number of matches under the threshold is selected out of the L iterations and assumed to be the correct orientation.
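The steps above can be sketched as a minimal RANSAC loop. Here `estimate_rigid` stands in for the closed-form absolute-orientation solution (using the SVD-based 2-D analogue rather than the orthonormal-matrix derivation in the cited work), and the function names, inlier threshold, and iteration count L are illustrative assumptions:

```python
import numpy as np

def estimate_rigid(src, dst):
    """Closed-form least-squares rotation R and translation t with dst ~ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))  # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_rigid(pts_a, pts_b, L=200, threshold=1.0, seed=0):
    """Sample 4 point pairs, fit a rigid transform, count inliers under the
    threshold, and keep the best transform over L iterations."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(L):
        idx = rng.choice(len(pts_a), size=4, replace=False)
        R, t = estimate_rigid(pts_a[idx], pts_b[idx])
        resid = np.linalg.norm(pts_a @ R.T + t - pts_b, axis=1)
        inliers = int((resid < threshold).sum())
        if inliers > best[2]:
            best = (R, t, inliers)
    return best
```

With a noiseless rigid transform and a handful of gross outliers, the loop recovers the true rotation and translation while the contaminated samples score few inliers.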