
High-quality Panorama Stitching based on Asymmetric Bidirectional Optical Flow

Published 1 Jun 2020 in cs.CV, cs.GR, and eess.IV (arXiv:2006.01201v3)

Abstract: In this paper, we propose a panorama stitching algorithm based on asymmetric bidirectional optical flow. This algorithm expects multiple photos captured by fisheye lens cameras as input, and then, through the proposed algorithm, these photos can be merged into a high-quality 360-degree spherical panoramic image. For photos taken from a distant perspective, the parallax among them is relatively small, and the obtained panoramic image can be nearly seamless and undistorted. For photos taken from a close perspective or with a relatively large parallax, a seamless though partially distorted panoramic image can also be obtained. Besides, with the help of Graphics Processing Unit (GPU), this algorithm can complete the whole stitching process at a very fast speed: typically, it only takes less than 30s to obtain a panoramic image of 9000-by-4000 pixels, which means our panorama stitching algorithm is of high value in many real-time applications. Our code is available at https://github.com/MungoMeng/Panorama-OpticalFlow.


Summary

  • The paper introduces an asymmetric bidirectional optical flow blending algorithm that significantly reduces pixel misalignment in 360-degree panoramas.
  • It employs a two-stage pipeline with distortion correction followed by GPU-accelerated optical flow using a Lucas-Kanade pyramid for accurate registration and blending.
  • Experimental evaluations demonstrate a reduction in mean misalignment to 8.2 pixels, outperforming previous methods and enabling near real-time high-resolution stitching.

High-quality Panorama Stitching via Asymmetric Bidirectional Optical Flow

Introduction

This paper presents an advanced panorama stitching framework leveraging asymmetric bidirectional optical flow to address alignment artifacts and stitching seams in 360-degree image compositing. The algorithm specifically targets multi-view inputs from fisheye lens cameras, correcting for both geometric and photometric inconsistencies prior to blending. The approach focuses on pixel-level image blending to handle parallax, exposure discrepancies, and substantial scene variation, deploying a highly optimized, GPU-accelerated pipeline.

Methodology

The proposed method decomposes the stitching pipeline into two principal stages: pre-processing and optical flow-based blending.

Pre-processing

Input images undergo distortion correction to counteract the fisheye effect and chromaticity correction to harmonize photometric properties. Coarse registration is performed via standard feature-based image alignment (Hugin toolkit), establishing approximate camera poses and transformation matrices. Pre-processing, as implemented, leverages established open-source tools, ensuring baseline geometric coherence across input images.
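The paper relies on the Hugin toolkit for this stage; as a rough illustration of what fisheye distortion correction involves, the sketch below projects a square fisheye image (assuming the common equidistant lens model) onto an equirectangular grid. The function name, the fixed field of view, and the nearest-neighbor sampling are illustrative assumptions, not the paper's implementation, which also estimates lens parameters and camera pose.

```python
import numpy as np

def fisheye_to_equirect(fish, fov_deg=180.0, out_w=360, out_h=180):
    """Project a square fisheye image (equidistant model) onto an
    equirectangular (longitude/latitude) grid, nearest-neighbor only.

    A simplified stand-in for the Hugin-based correction step.
    """
    h, w = fish.shape[:2]
    cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0
    theta_max = np.radians(fov_deg) / 2.0

    # Output grid: longitude and latitude of each panorama pixel
    lon = np.linspace(-theta_max, theta_max, out_w)
    lat = np.linspace(-np.pi / 2, np.pi / 2, out_h)
    lon, lat = np.meshgrid(lon, lat)

    # 3D ray direction for each output pixel (camera looks along +z)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from optical axis
    phi = np.arctan2(y, x)                    # azimuth in the image plane
    r = radius * theta / theta_max            # equidistant projection: r = f*theta

    u = np.clip((cx + r * np.cos(phi)).astype(int), 0, w - 1)
    v = np.clip((cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return fish[v, u]
```

A production pipeline would use calibrated intrinsics and bilinear interpolation; this version only shows the geometric mapping.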

Asymmetric Bidirectional Optical Flow Image Blending

The core contribution is a blending algorithm utilizing asymmetric bidirectional optical flow between adjacent image pairs. Unlike prior methods relying on unidirectional or symmetric flows, this approach recognizes and exploits the intrinsic asymmetry observed in practical optical flow computation, countering error accumulation and bias.

For overlapping regions between each image pair, the method computes:

  • Forward optical flow, mapping OverlappedL onto OverlappedR
  • Backward optical flow, mapping OverlappedR onto OverlappedL
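To make the bidirectional computation concrete, here is a minimal single-level dense Lucas-Kanade flow in plain NumPy. The paper uses a GPU pyramidal Lucas-Kanade implementation; this sketch solves the 2×2 LK normal equations per pixel over a square window, and calling it in both directions yields forward and backward flows that are, in practice, not exact negatives of each other — the asymmetry the method exploits.

```python
import numpy as np

def lucas_kanade_flow(img1, img2, win=5):
    """Single-level dense Lucas-Kanade optical flow (NumPy-only sketch).

    Returns an (H, W, 2) field of (u, v) displacements mapping img1
    toward img2. A pyramidal, GPU version would add coarse-to-fine
    refinement for large motions.
    """
    I1 = img1.astype(np.float64)
    I2 = img2.astype(np.float64)
    Ix = np.gradient(I1, axis=1)   # spatial gradients of the first image
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal difference

    def box_sum(a, r):
        # Window sums via an integral image (summed-area table)
        n = 2 * r + 1
        p = np.pad(a, r, mode='edge')
        ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
        ii[1:, 1:] = p.cumsum(0).cumsum(1)
        return ii[n:, n:] - ii[:-n, n:] - ii[n:, :-n] + ii[:-n, :-n]

    # Entries of the per-pixel 2x2 normal equations A [u, v]^T = -b
    Sxx = box_sum(Ix * Ix, win)
    Sxy = box_sum(Ix * Iy, win)
    Syy = box_sum(Iy * Iy, win)
    Sxt = box_sum(Ix * It, win)
    Syt = box_sum(Iy * It, win)

    det = Sxx * Syy - Sxy * Sxy
    det = np.where(np.abs(det) < 1e-9, np.inf, det)  # guard singular windows
    u = (-Syy * Sxt + Sxy * Syt) / det
    v = ( Sxy * Sxt - Sxx * Syt) / det
    return np.stack([u, v], axis=-1)
```

Calling `lucas_kanade_flow(overlap_l, overlap_r)` gives the forward flow and swapping the arguments gives the backward flow.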

The blending region is parameterized by a global coefficient array, Blend, which encodes each overlap pixel's spatial proximity to the unblended regions and thereby controls the local interpolation weights. For every pixel in the blended output, softmax-weighted fusion modulates the color contributions from each original image as a function of flow magnitude and proximity. A GPU-optimized pyramidal Lucas-Kanade algorithm computes the optical flows, trading little quality for substantial speed.
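The following sketch shows the core idea of flow-guided asymmetric blending under simplifying assumptions: the Blend coefficient is reduced to a linear horizontal ramp, the softmax weighting is replaced by plain ramp weights, and warping is nearest-neighbor. It assumes forward and backward flow fields of shape (H, W, 2) for the overlap strips; the function and variable names are illustrative, not the paper's.

```python
import numpy as np

def flow_blend(left, right, fwd, bwd):
    """Asymmetric flow-guided blending of two overlapping strips.

    `fwd` maps pixels of `left` toward `right`; `bwd` the reverse.
    Each pixel is warped by a fraction of its flow given by a ramp
    coefficient (0 at the left edge, 1 at the right edge), then the
    two warped strips are mixed with the same ramp weights, so each
    image is unwarped and dominant next to its own unblended region.
    """
    h, w = left.shape[:2]
    t = np.linspace(0.0, 1.0, w)[None, :]       # simplified Blend ramp
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)

    def warp(img, flow, frac):
        # Sample img at positions displaced by frac * flow (nearest neighbor)
        xs = np.clip(np.rint(xx + frac * flow[..., 0]), 0, w - 1).astype(int)
        ys = np.clip(np.rint(yy + frac * flow[..., 1]), 0, h - 1).astype(int)
        return img[ys, xs]

    warped_l = warp(left, fwd, t)         # left warped increasingly toward right
    warped_r = warp(right, bwd, 1.0 - t)  # right warped increasingly toward left
    wgt = t if warped_l.ndim == 2 else t[..., None]
    return (1.0 - wgt) * warped_l + wgt * warped_r
```

At the left boundary the output equals the unwarped left image and at the right boundary the unwarped right image, which is what keeps the blended strip continuous with the unblended panorama on both sides.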

The process is iterated over all image pairs, enabling global panoramic synthesis.

Experimental Evaluation

Quantitative evaluation utilizes average pixel misalignment measured at several landmark positions for four representative cases (A–D), across 20 panoramic outputs of 9000×4000 pixels each. Compared to APAP [10] and the optical flow-based method of Li et al. [16], the proposed algorithm consistently achieves lower mean misalignment values:

Algorithm               Mean Pixel Misalignment (pixels)
APAP [10]               37.7
Li et al. [16]          17.0
Proposed (this paper)    8.2

The method yields visually apparent improvements in seamlessly integrating image pairs, even under large baseline or parallax conditions. The computational efficiency is also notable: full-resolution 9000×4000 panoramic assembly completes in under 30 seconds on a GPU, confirming suitability for near real-time deployment.

Implications and Future Developments

The integration of asymmetric bidirectional optical flow into the stitching pipeline advances the practical quality boundary for high-resolution panoramic synthesis, particularly in scenes with non-ideal acquisition geometry and significant appearance variation. The algorithm's architecture, which separates classical coarse registration from local pixel-wise blending, favors extensibility. Future enhancements could target the coarse registration phase, incorporating recent progress in robust feature matching or learning-based global alignment, which promises further reduction of residual misalignment.

The emphasis on pixel-level, flow-guided blending anticipates expanded adoption in VR/AR pipelines, immersive video, and telepresence domains, where perceptual continuity and real-time responsiveness are required. Integrating deep learned optical flow estimators or hybridizing flow with depth inference presents a logical progression to further handle occlusions or extreme parallax.

Conclusion

The paper introduces and validates a panorama stitching method based on asymmetric bidirectional optical flow, demonstrating superior seam minimization and alignment accuracy over contemporary alternatives. By combining classical registration with locally adaptive optical flow blending, the algorithm achieves near real-time, high-fidelity 360-degree image assembly, and it suggests multiple directions for improved registration or blending in subsequent research.
