Image stitching

Image stitching or photo stitching is the process of combining multiple photographic images with overlapping fields of view to produce a segmented panorama or high-resolution image. Commonly performed through the use of computer software, most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results,[1][2] although some stitching algorithms actually benefit from differently exposed images by doing high-dynamic-range imaging in regions of overlap.[3][4] Some digital cameras can stitch their photos internally.

Two images stitched together. The photo on the right is distorted slightly so that it matches up with the one on the left.

Applications

Image stitching is widely used in modern applications, ranging from document mosaicing[5] and real-time video mosaicing[6] to the creation of high-resolution panoramas.

Alcatraz Island, shown in a panorama created by image stitching

Process

The image stitching process can be divided into three main components: image registration, calibration, and blending.

Image stitching algorithms

This sample image shows geometrical registration and stitching lines in panorama creation.

In order to estimate image alignment, algorithms are needed to determine the appropriate mathematical model relating pixel coordinates in one image to pixel coordinates in another. Algorithms that combine direct pixel-to-pixel comparisons with gradient descent (and other optimization techniques) can be used to estimate these parameters.
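Direct pixel-to-pixel alignment can be illustrated with a minimal sketch: a brute-force search for the integer translation that minimizes the mean sum of absolute differences (SAD) over the overlap of two images. The tiny synthetic images and the overlap threshold below are illustrative only; practical implementations use optimization rather than exhaustive search.

```python
# Direct (pixel-based) alignment sketch: exhaustively search for the
# integer translation minimizing the mean sum of absolute differences
# (SAD) over the overlap between two small grayscale images.

def sad(a, b, dx, dy, min_overlap=8):
    """Mean absolute difference between a[y][x] and b[y+dy][x+dx]."""
    total, count = 0, 0
    for y in range(len(a)):
        for x in range(len(a[0])):
            sy, sx = y + dy, x + dx
            if 0 <= sy < len(b) and 0 <= sx < len(b[0]):
                total += abs(a[y][x] - b[sy][sx])
                count += 1
    # Reject shifts with too little overlap, which trivially score well.
    return total / count if count >= min_overlap else float("inf")

def best_translation(a, b, search=3):
    """Brute-force the shift (dx, dy) with the lowest mean SAD."""
    shifts = [(dx, dy) for dy in range(-search, search + 1)
                       for dx in range(-search, search + 1)]
    return min(shifts, key=lambda s: sad(a, b, s[0], s[1]))

# b is image a with its content shifted one pixel to the left.
a = [[0, 0, 9, 0], [0, 9, 9, 0], [0, 0, 9, 0], [0, 0, 0, 0]]
b = [[0, 9, 0, 0], [9, 9, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0]]
print(best_translation(a, b))  # → (-1, 0)
```

In real stitchers this cost surface is minimized with gradient descent or coarse-to-fine search rather than enumeration.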

Distinctive features can be found in each image and then efficiently matched to rapidly establish correspondences between pairs of images. When multiple images exist in a panorama, techniques have been developed to compute a globally consistent set of alignments and to efficiently discover which images overlap one another.

A final compositing surface onto which to warp or projectively transform and place all of the aligned images is needed, as are algorithms to seamlessly blend the overlapping images, even in the presence of parallax, lens distortion, scene motion, and exposure differences.

Image stitching issues

Since the illumination in two views cannot be guaranteed to be identical, stitching two images could create a visible seam. Other reasons for seams could be the background changing between two images for the same continuous foreground. Other major issues to deal with are the presence of parallax, lens distortion, scene motion, and exposure differences. In a non-ideal real-life case, the intensity varies across the whole scene, and so does the contrast and intensity across frames. Additionally, the aspect ratio of a panorama image needs to be taken into account to create a visually pleasing composite.

For panoramic stitching, the ideal set of images will have a reasonable amount of overlap (at least 15–30%) to overcome lens distortion and have enough detectable features. The set of images will have consistent exposure between frames to minimize the probability of seams occurring.

Keypoint detection

Feature detection is necessary to automatically find correspondences between images. Robust correspondences are required in order to estimate the transformation needed to align an image with the image it is being composited onto. Corners, blobs, and other distinctive interest points, such as Harris corners or difference-of-Gaussian extrema, make good features since they are repeatable and distinct.

One of the first operators for interest point detection was developed by Hans P. Moravec in 1977 for his research involving the automatic navigation of a robot through a cluttered environment. Moravec also defined the concept of "points of interest" in an image and concluded that these interest points could be used to find matching regions in different images. The Moravec operator is considered a corner detector because it defines interest points as points where there are large intensity variations in all directions. This is often the case at corners. However, Moravec was not specifically interested in finding corners, just distinct regions in an image that could be used to register consecutive image frames.
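The Moravec operator can be sketched as follows: shift a small window in several directions and take the minimum sum of squared differences (SSD); a large minimum means the intensity varies in every direction, as at a corner. The synthetic image and window size here are illustrative.

```python
# Moravec interest operator sketch: shift a (2w+1) x (2w+1) window in
# four directions and keep the minimum SSD. Corners score high in every
# direction; edges score near zero along the edge direction.

def moravec_score(img, y, x, w=1):
    shifts = [(1, 0), (0, 1), (1, 1), (1, -1)]
    scores = []
    for dy, dx in shifts:
        ssd = 0
        for v in range(-w, w + 1):
            for u in range(-w, w + 1):
                ssd += (img[y + v][x + u] - img[y + v + dy][x + u + dx]) ** 2
        scores.append(ssd)
    return min(scores)

# Synthetic 9x9 image: a bright square fills the lower-right quadrant,
# so (4, 4) is a corner and (4, 6) lies on a horizontal edge.
img = [[100 if (y >= 4 and x >= 4) else 0 for x in range(9)] for y in range(9)]
corner = moravec_score(img, 4, 4)
edge = moravec_score(img, 4, 6)
print(corner, edge)  # the corner score exceeds the edge score
```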

Harris and Stephens improved upon Moravec's corner detector by considering the differential of the corner score with respect to direction directly. They needed it as a processing step to build interpretations of a robot's environment based on image sequences. Like Moravec, they needed a method to match corresponding points in consecutive image frames, but were interested in tracking both corners and edges between frames.
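The Harris–Stephens improvement can be sketched with the standard corner response: sums of products of image gradients over a window form a 2×2 matrix M, and the response det(M) − k·tr(M)² is positive at corners and negative at edges. The synthetic image, window size, and constant k = 0.04 below are illustrative.

```python
# Harris corner response sketch: accumulate gradient products over a
# window around (y, x), then score with det(M) - k * trace(M)^2.

def harris_response(img, y, x, k=0.04, w=1):
    ixx = ixy = iyy = 0.0
    for v in range(y - w, y + w + 1):
        for u in range(x - w, x + w + 1):
            # Central-difference image gradients.
            ix = (img[v][u + 1] - img[v][u - 1]) / 2.0
            iy = (img[v + 1][u] - img[v - 1][u]) / 2.0
            ixx += ix * ix
            iyy += iy * iy
            ixy += ix * iy
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace * trace

# Same synthetic scene: bright lower-right quadrant of a 9x9 image.
img = [[100 if (y >= 4 and x >= 4) else 0 for x in range(9)] for y in range(9)]
print(harris_response(img, 4, 4))  # corner: positive response
print(harris_response(img, 4, 6))  # edge: negative response
```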

SIFT and SURF are more recent keypoint or interest-point detection algorithms; note, however, that SURF is patented and its commercial use restricted. Once features have been detected, a descriptor method such as the SIFT descriptor can be applied so that they can later be matched.

Registration

Image registration involves matching features[7] in a set of images, or using direct alignment methods to search for image alignments that minimize the sum of absolute differences between overlapping pixels.[8] When using direct alignment methods, one might first calibrate the images to get better results. Additionally, users may input a rough model of the panorama to help the feature-matching stage, so that, for example, only neighboring images are searched for matching features. Since there is then a smaller group of features to match, the result of the search is more accurate and the comparison executes faster.

To estimate a robust model from the data, a common method used is known as RANSAC, an abbreviation for "RANdom SAmple Consensus". It is an iterative method for robust parameter estimation, fitting mathematical models to sets of observed data points which may contain outliers. The algorithm is non-deterministic in the sense that it produces a reasonable result only with a certain probability, a probability which increases as more iterations are performed. Because the method is probabilistic, it may return a different result each time it is run.
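The RANSAC loop can be illustrated on the simplest possible model, a 2-D line fit; the same sample–score–keep-best structure carries over to homography estimation. The iteration count, threshold, and data below are illustrative.

```python
import random

def ransac_line(points, iters=200, threshold=0.5, seed=0):
    """Robustly fit y = m*x + c by repeatedly sampling two-point models
    and keeping the model with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair cannot define y = m*x + c
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -20)]
model, inliers = ransac_line(pts)
print(model, len(inliers))  # → (2.0, 1.0) 10
```

A least-squares fit to the same data would be pulled far off the true line by the two outliers; RANSAC simply never rewards models built on them.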

The RANSAC algorithm has found many applications in computer vision, including the simultaneous solving of the correspondence problem and the estimation of the fundamental matrix related to a pair of stereo cameras. The basic assumption of the method is that the data consists of "inliers", i.e., data whose distribution can be explained by some mathematical model, and "outliers" which are data that do not fit the model. Outliers are considered points which come from noise, erroneous measurements, or simply incorrect data.

For the problem of homography estimation, RANSAC works by trying to fit several models using some of the point pairs and then checking whether the models relate most of the points. The best model – the homography producing the highest number of correct matches – is then chosen as the answer; thus, if the ratio of outliers to data points is very low, RANSAC outputs a decent model fitting the data.

Calibration

Image calibration aims to minimize differences between an ideal lens model and the camera–lens combination that was used: optical defects such as distortions, exposure differences between images, vignetting,[9] camera response and chromatic aberrations. If feature detection methods were used to register images and the absolute positions of the features were recorded and saved, stitching software may use the data for geometric optimization of the images in addition to placing the images on the panosphere. Panotools and its various derivative programs use this method.

Alignment

Alignment may be necessary to transform an image to match the viewpoint of the image it is being composited with. In simple terms, alignment is a change of coordinate system: the image adopts a new coordinate system under which it matches the required viewpoint. The types of transformation an image may undergo are pure translation, pure rotation, a similarity transform (which combines translation, rotation and scaling of the image to be transformed), an affine transform, or a projective transform.

A projective transformation is the most general of the two-dimensional planar transformations: the only visible features preserved in the transformed image are straight lines, whereas an affine transform additionally preserves parallelism.

Projective transformation can be mathematically described as

x′ = H x,

where x denotes the points in the old coordinate system, x′ the corresponding points in the transformed image, and H the homography matrix.
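As a minimal illustration (a hand-rolled sketch, not a library routine), applying a homography amounts to a matrix–vector product in homogeneous coordinates followed by division by the third component:

```python
def apply_homography(H, x, y):
    """Map point (x, y) through the 3x3 homography H: form the
    homogeneous point (x, y, 1), multiply by H, then dehomogenize."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    ws = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / ws, ys / ws

# A pure translation by (5, 3) written as a homography.
H = [[1, 0, 5], [0, 1, 3], [0, 0, 1]]
print(apply_homography(H, 2, 2))  # → (7.0, 5.0)
```

The division by ws is what distinguishes a projective transform from an affine one; with a nonzero bottom row, straight lines are preserved but parallel lines generally are not.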

Expressing the points x and x′ in terms of the camera intrinsics (K and K′) and the rotation and translation [R t] relating each camera to the real-world coordinates X and X′, we get

x = K [R t] X and x′ = K′ [R′ t′] X′.

Using the above two equations and the homography relation between x’ and x, we can derive

H = K′ R′ R⁻¹ K⁻¹.

The homography matrix H has 8 parameters or degrees of freedom. The homography can be computed using the Direct Linear Transform (DLT) and singular value decomposition (SVD), with

A   h = 0,

where A is the matrix constructed from the coordinates of the correspondences and h is the 9-element vector obtained by reshaping the homography matrix. To obtain h, we can simply apply the SVD, A = U S Vᵀ, and take h to be the column of V corresponding to the smallest singular value. This holds because h lies in the null space of A. Since there are 8 degrees of freedom, the algorithm requires at least four point correspondences. When RANSAC is used to estimate the homography and multiple correspondences are available, the correct homography matrix is the one with the maximum number of inliers.
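The DLT procedure above can be sketched as follows, assuming NumPy is available; the point correspondences here are illustrative (a known translation recovered from four points):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H from >= 4 point correspondences via the Direct Linear
    Transform: build A so that A h = 0, then take h as the right singular
    vector of A associated with the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)          # rows of vt are right singular vectors
    H = vt[-1].reshape(3, 3)             # last row: smallest singular value
    return H / H[2, 2]                   # normalize so H[2][2] = 1

# Recover a known homography (a translation by (2, 1)) from 4 points.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(x + 2, y + 1) for x, y in src]
H = dlt_homography(src, dst)
print(np.round(H, 6))
```

With exact, noise-free correspondences the smallest singular value is zero and H is recovered up to scale; with noisy data the same construction gives a least-squares estimate, which RANSAC then wraps to reject outliers.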

Compositing

Compositing is the process whereby the rectified images are aligned in such a way that they appear as a single shot of a scene. Compositing can be done automatically, since the algorithm now knows which correspondences overlap.

Blending

Image blending involves executing the adjustments figured out in the calibration stage, combined with remapping of the images to an output projection. Colors are adjusted between images to compensate for exposure differences. If applicable, high dynamic range merging is done along with motion compensation and deghosting. Images are blended together and seam line adjustment is done to minimize the visibility of seams between images.

The seam can be reduced by a simple gain adjustment, a compensation which essentially minimizes the intensity difference of overlapping pixels. Blending algorithms additionally allot more weight to pixels near the centre of the image. Gain-compensated and multi-band blended images compare best (Brown and Lowe, IJCV 2007).
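Centre-weighting can be sketched in one dimension with simple linear feathering: each image's weight ramps down toward its own border, so the transition is spread across the overlap instead of forming a hard seam. This is an illustration of plain feathering, not of gain compensation or multi-band blending; the scanlines and overlap width are illustrative.

```python
# Feathered blending over a 1-D overlap: the last `overlap` pixels of
# row_a cover the first `overlap` pixels of row_b, and the weight of
# row_b ramps linearly from 0 to 1 across that region.

def feather_blend(row_a, row_b, overlap):
    out = list(row_a[:-overlap])
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)      # weight of row_b: 0 -> 1
        out.append((1 - w) * row_a[len(row_a) - overlap + i] + w * row_b[i])
    out.extend(row_b[overlap:])
    return out

# Exposure difference: the same scene, but row_b is 20 levels brighter.
row_a = [100, 100, 100, 100]
row_b = [120, 120, 120, 120]
blended = feather_blend(row_a, row_b, overlap=2)
print(blended)  # values step smoothly from 100 up to 120
```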

Straightening is another method to rectify the image. Matthew Brown and David G. Lowe, in their paper 'Automatic Panoramic Image Stitching using Invariant Features', describe methods of straightening which apply a global rotation such that the up vector u is vertical (in the rendering frame), which effectively removes the wavy effect from output panoramas. This process is similar to image rectification and, more generally, to software correction of optical distortions in single photographs.

Even after gain compensation, some image edges remain visible due to a number of unmodelled effects, such as vignetting (intensity decreasing towards the edge of the image), parallax effects due to unwanted motion of the optical centre, mis-registration errors due to mismodelling of the camera, radial distortion, and so on. For these reasons, Brown and Lowe propose a blending strategy called multi-band blending.

Projective layouts

Comparison of Mercator and rectilinear projections
 
Comparing distortions near poles of panosphere by various cylindrical formats.

For image segments that have been taken from the same point in space, stitched images can be arranged using one of various map projections.

Rectilinear

Rectilinear projection, where the stitched image is viewed on a two-dimensional plane intersecting the panosphere in a single point. Lines that are straight in reality are shown as straight regardless of their direction in the image. Wide views – around 120° or so – start to exhibit severe distortion near the image borders. One case of rectilinear projection is the use of cube faces with cubic mapping for panorama viewing: the panorama is mapped onto six squares, with each cube face showing a 90° by 90° area of the panorama.

Cylindrical

Cylindrical projection, where the stitched image shows a 360° horizontal field of view and a limited vertical field of view. Panoramas in this projection are meant to be viewed as though the image is wrapped into a cylinder and viewed from within. When viewed on a 2D plane, horizontal lines appear curved while vertical lines remain straight.[10] Vertical distortion increases rapidly when nearing the top of the panosphere. There are various other cylindrical formats, such as Mercator and Miller cylindrical which have less distortion near the poles of the panosphere.
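The forward cylindrical warp can be sketched with the standard mapping x′ = f·atan(x/f), y′ = f·y/√(x² + f²), where f is the focal length in pixels and (x, y) are measured from the principal point. The focal length and test points below are illustrative.

```python
import math

def to_cylindrical(x, y, f):
    """Map image-plane coordinates (x, y), measured from the principal
    point, onto an unrolled cylinder for a camera with focal length f:
    theta is the angle around the cylinder, h the height on it."""
    theta = math.atan2(x, f)        # x' = f * theta
    h = y / math.hypot(x, f)        # y' = f * h
    return f * theta, f * h

# A point on the optical axis is unchanged; off-axis x is compressed,
# which is why horizontal lines bow while vertical lines stay straight.
print(to_cylindrical(0.0, 0.0, 700.0))    # → (0.0, 0.0)
print(to_cylindrical(300.0, 0.0, 700.0))  # x' is less than 300
```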

Spherical

2D plane of a 360° sphere panorama
(view as a 360° interactive panorama)

Spherical projection or equirectangular projection – which is strictly speaking another cylindrical projection – where the stitched image shows a 360° horizontal by 180° vertical field of view i.e. the whole sphere. Panoramas in this projection are meant to be viewed as though the image is wrapped into a sphere and viewed from within. When viewed on a 2D plane, horizontal lines appear curved as in a cylindrical projection, while vertical lines remain vertical.[10]

Panini

Since a panorama is basically a map of a sphere, various other mapping projections from cartographers can also be used if so desired. Additionally there are specialized projections which may have more aesthetically pleasing advantages over normal cartography projections, such as Hugin's Panini projection[11] – named after Italian vedutismo painter Giovanni Paolo Panini[12] – or PTGui's Vedutismo projection.[13] Different projections may be combined in the same image to fine-tune the final look of the output image.[14]

Stereographic

Stereographic projection or fisheye projection can be used to form a little planet panorama by pointing the virtual camera straight down and setting the field of view large enough to show the whole ground and some of the areas above it; pointing the virtual camera upwards instead creates a tunnel effect. The conformality of the stereographic projection may produce a more visually pleasing result than an equal-area fisheye projection, as discussed in the stereographic projection article.

Artifacts

Artifacts due to parallax error
Artifacts due to subject movement

The use of images not taken from the same place (on a pivot about the entrance pupil of the camera)[15] can lead to parallax errors in the final product. When the captured scene features rapid movement or dynamic motion, artifacts may occur as a result of time differences between the image segments. "Blind stitching" through feature-based alignment methods (see autostitch), as opposed to manual selection and stitching, can cause imperfections in the assembly of the panorama.

Software

Dedicated programs include Autostitch, Hugin, Ptgui, Panorama Tools, Microsoft Research Image Composite Editor and CleVR Stitcher. Many other programs can also stitch multiple images; a popular example is Adobe Systems' Photoshop, which includes a tool known as Photomerge and, in the latest versions, the new Auto-Blend. Other programs such as VideoStitch make it possible to stitch videos, and Vahana VR enables real-time video stitching. The Image Stitching module for the QuickPHOTO microscope software makes it possible to interactively stitch together multiple fields of view from a microscope using the camera's live view. It can also be used for manual stitching of whole microscopy samples.

References

  1. ^ Mann, Steve; Picard, R. W. (November 13–16, 1994). "Virtual bellows: constructing high-quality stills from video". Proceedings of the IEEE First International Conference on Image Processing. IEEE International Conference. Austin, Texas: IEEE. doi:10.1109/ICIP.1994.413336. S2CID 16153752.
  2. ^ Ward, Greg (2006). "Hiding seams in high dynamic range panoramas". Proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization. ACM International Conference. Vol. 153. ACM. doi:10.1145/1140491.1140527. ISBN 1-59593-429-4.
  3. ^ Mann, Steve (May 9–14, 1993). Compositing Multiple Pictures of the Same Scene. Proceedings of the 46th Annual Imaging Science & Technology Conference.
  4. ^ S. Mann, C. Manders, and J. Fung, "The Lightspace Change Constraint Equation (LCCE) with practical application to estimation of the projectivity+gain transformation between multiple pictures of the same subject matter Archived 2023-03-14 at the Wayback Machine" IEEE International Conference on Acoustics, Speech, and Signal Processing, 6–10 April 2003, pp III - 481-4 vol.3
  5. ^ Hannuksela, Jari; Sangi, Pekka; Heikkila, Janne; Liu, Xu; Doermann, David (2007). "Document Image Mosaicing with Mobile Phones". 14th International Conference on Image Analysis and Processing (ICIAP 2007). pp. 575–582. doi:10.1109/ICIAP.2007.4362839. ISBN 978-0-7695-2877-9.
  6. ^ Breszcz, M.; Breckon, T. P. (August 2015). "Real-time Construction and Visualization of Drift-Free Video Mosaics from Unconstrained Camera Motion" (PDF). The Journal of Engineering. 2015 (16): 229–240. doi:10.1049/joe.2015.0016. breszcz15mosaic.[permanent dead link]
  7. ^ Szeliski, Richard (2005). "Image Alignment and Stitching" (PDF). Retrieved 2008-06-01.
  8. ^ S. Suen; E. Lam; K. Wong (2007). "Photographic stitching with optimized object and color matching based on image derivatives". Optics Express. 15 (12): 7689–7696. Bibcode:2007OExpr..15.7689S. doi:10.1364/OE.15.007689. PMID 19547097.
  9. ^ d'Angelo, Pablo (2007). "Radiometric alignment and vignetting calibration" (PDF).
  10. ^ a b Wells, Sarah; Gross, Barry; Gross, Michael; Frischer, Bernard; Donavan, Brian; Johnson, Eugene; Martin, Worthy; Reilly, Lisa; Rourke, Will; Stuart, Ken; Tuite, Michael; Watson, Tom; Wassel, Madelyn (2007). "Panorama Creation (Part 1): Methods And Techniques for Capturing Images". IATH Best Practices Guide to Digital Panoramic Photography. Archived from the original on 2008-10-06. Retrieved 2008-06-01.
  11. ^ "The General Panini Projection". PanoTools.org Wiki. 2019-08-21.
  12. ^ German, Daniel M. (2008-12-29). "new panini projection". Google Groups.
  13. ^ New House Internet Services BV. "Projections". PTGui.
  14. ^ Lyons, Max. "PTAssembler Projections". TawbaWare. section "Hybrid Projection".
  15. ^ Littlefield, Rik (2006-02-06). "Theory of the "No-Parallax" Point in Panorama Photography" (PDF). ver. 1.0. Retrieved 2008-06-01.