Topic: distorted images within GCPS optimization window
I've been running GCPS Optimization with NAPP aerial photos (scanned at 14 microns) as the slave image and a shaded relief from LiDAR (gridded to 0.5 meters) as the master image.
During optimization, I've noticed that several of the image tiles for my GCPs appear distorted relative to the original image (this occurs with both the statistical and frequency options). The distortion varies in severity. Here are some examples:
1. A rather bad case:
and a view from the original images:
2. A mildly distorted image
and the original:
3. A non-distorted case:
and the originals:
In case it makes any difference, here's the hillshade I'm using as my master image:
It's LiDAR coverage over my main area of interest, with a 10 m DEM (from USGS Seamless, resampled to match the LiDAR's resolution) filling in the areas where there's no LiDAR coverage.
For the time being, I've been removing the badly distorted tie points from my GCPS file and re-optimizing with only the mildly distorted and undistorted points. This gives marginally better results in the "convergence quality report" section of the GCPS optimization report.
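For what it's worth, the pruning I've been doing by hand amounts to: fit a simple transform to all the tie points, drop the worst residual, repeat until everything left fits within a tolerance. A minimal NumPy sketch of that idea (an affine fit stands in for whatever polynomial model the optimizer actually uses; `prune_gcps` and the threshold are my own names, not anything from the software):

```python
import numpy as np

def prune_gcps(slave_xy, master_xy, threshold=2.0):
    """Iteratively drop tie points whose residual against a global
    affine fit exceeds `threshold` (in master-image units).
    slave_xy, master_xy: (N, 2) arrays of corresponding coordinates.
    Returns a boolean mask of the tie points to keep."""
    n = len(slave_xy)
    keep = np.ones(n, dtype=bool)
    slave_h = np.column_stack([slave_xy, np.ones(n)])  # homogeneous coords
    while keep.sum() > 3:  # an affine fit needs at least 3 points
        # Least-squares affine: master ~ [x, y, 1] @ A
        A, *_ = np.linalg.lstsq(slave_h[keep], master_xy[keep], rcond=None)
        resid = np.linalg.norm(slave_h @ A - master_xy, axis=1)
        worst = int(np.argmax(np.where(keep, resid, -np.inf)))
        if resid[worst] <= threshold:
            break  # everything remaining fits the model
        keep[worst] = False  # drop the single worst tie point, refit
    return keep

# Demo: nine tie points under an exact affine relation, plus one outlier
rng = np.random.default_rng(1)
slave = rng.uniform(0, 100, (10, 2))
master = slave * 2.0 + 5.0
master[0] += 50.0  # simulate one badly matched (distorted) tie point
keep = prune_gcps(slave, master, threshold=2.0)
print(keep)
```

The demo drops only the deliberately corrupted point and keeps the other nine.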
I was wondering if anyone else has experienced similar problems and knows why they arise. Is simply removing the badly distorted GCPs and moving on the best strategy, or is there something more here I should be concerned about?
I have used my pruned set of GCPs to go on and make a resampled image, and the result appears to match the LiDAR hillshade well, so I'm assuming this is OK, but I thought I should check here before I start processing tons of images.
Also, a bit unrelated: I've noticed that when using the frequency correlator, my convergence value often increases (sometimes by a factor of 10 or more) over the iterations. Is this normal?
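In case it helps frame the question: my mental model of the frequency correlator is basically phase correlation, i.e., match a chip from the slave against the master in the Fourier domain and read the shift off the correlation peak. A rough NumPy sketch of that idea (my own naming and implementation, not the tool's actual internals):

```python
import numpy as np

def phase_correlate(master_chip, slave_chip):
    """Estimate the integer (row, col) shift between two equal-size
    image chips via phase correlation. Returns (shift, peak_strength);
    a weak or ambiguous peak would suggest a poor-quality tie point."""
    F1 = np.fft.fft2(master_chip)
    F2 = np.fft.fft2(slave_chip)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase, whiten magnitude
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past half the chip size wrap around; map them to negative offsets
    shifts = tuple(int(p) if p <= s // 2 else int(p) - s
                   for p, s in zip(peak, corr.shape))
    return shifts, float(corr.max())

# Demo: shift a random texture by (3, -5) and recover the offset
rng = np.random.default_rng(0)
chip = rng.random((64, 64))
shifted = np.roll(chip, (3, -5), axis=(0, 1))
offset, strength = phase_correlate(shifted, chip)
print(offset)  # → (3, -5)
```

On a clean, texture-rich chip like this the peak strength is near 1.0; I'd expect low-contrast or geometrically warped chips to give a much flatter correlation surface, which is why I suspect my distorted tiles correlate poorly.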