Method of automatic coregistration of digital remote sensing images from different sources
A.N. Borisov 1, V.V. Myasnikov 1, V.V. Sergeev 1
1 Samara National Research University,
443086, Samara, Russia, Moskovskoye Shosse 34
DOI: 10.18287/2412-6179-CO-1604
Pages: 932-943.
Full text of article: Russian language.
Abstract:
In this paper, a method for the automatic coregistration of heterogeneous digital Earth remote sensing images is proposed. The method is designed to align color, grayscale, multispectral, and radar images, as well as their combinations, whose spatial resolutions may differ by up to a factor of four (optionally, sixteen). The main stages of the proposed method are: optional upscaling (by up to a factor of four); optional reduction of the number of image channels to three or one; and keypoint detection, description, and matching. To obtain a universal and robust solution for the latter stages, the best-known algorithms were compared: SIFT, SAR-SIFT, RIFT, and the trainable RoMa. Experimental studies on the indicated types of satellite images demonstrate a clear advantage of the RoMa neural network model trained on a variety of heterogeneous images. To further improve the alignment accuracy, a priori information about the images in the form of their georeferencing data is used.
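As an illustration of the pipeline outlined above, the following Python sketch strings its stages together with OpenCV: optional upscaling, channel reduction, keypoint detection and description, descriptor matching, and robust transform estimation. It is a minimal sketch only: SIFT stands in for the interchangeable matching stage (the paper's best results are obtained with the RoMa dense matcher), the georeferencing prior is omitted, and the function coregister and its parameters are illustrative rather than the authors' implementation.
```python
# Minimal sketch of the coregistration stages described above, assuming
# OpenCV (cv2) and NumPy are available. SIFT is only a stand-in for the
# interchangeable matching stage; the georeferencing prior is not modeled.
import cv2
import numpy as np

def coregister(ref_img, tgt_img, scale=1.0):
    """Estimate an affine transform that maps tgt_img onto ref_img."""
    # Optional upscaling of the lower-resolution image (up to 4x in the paper).
    if scale != 1.0:
        tgt_img = cv2.resize(tgt_img, None, fx=scale, fy=scale,
                             interpolation=cv2.INTER_CUBIC)

    # Optional channel reduction: collapse a 3-channel image to one channel
    # (true multispectral input would first need a band-selection step).
    def to_gray(img):
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
    ref_gray, tgt_gray = to_gray(ref_img), to_gray(tgt_img)

    # Keypoint detection and description (placeholder detector).
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_tgt, des_tgt = sift.detectAndCompute(tgt_gray, None)

    # Descriptor matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_tgt, des_ref, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]

    # Robust estimation of the aligning transform with RANSAC.
    src = np.float32([kp_tgt[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    transform, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return transform, inliers
```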
Keywords:
digital remote sensing images, automatic image coregistration, multispectral images, radar images.
Citation:
Borisov AN, Myasnikov VV, Sergeev VV. Method of automatic coregistration of digital remote sensing images from different sources. Computer Optics 2024; 48(6): 932-943. DOI: 10.18287/2412-6179-CO-1604.
Acknowledgements:
This work was supported by the Ministry of Science and Higher Education, Russia (Agreement No. 075-15-2024-558).
References:
- Horn BKP. Robot vision. Cambridge: MIT Press; 1986. ISBN: 978-0-262-08159-7.
- Forsyth DA, Ponce J. Computer vision: A modern approach. New Jersey: Prentice Hall; 2002. ISBN: 978-0-13-085198-7.
- Davies ER, Turk MA, eds. Advanced methods and deep learning in computer vision. San Diego, CA: Academic Press; 2022. ISBN: 978-0-12-822109-9.
- Dellinger F, Delon J, Gousseau Y, Michel J, Tupin F. SAR-SIFT: A SIFT-like algorithm for SAR images. IEEE Trans Geosci Remote Sens 2015; 53(1): 453-466. DOI: 10.1109/TGRS.2014.2323552.
- Ma W, Wen Z, Wu Y, Jiao L, Gong M, Zheng Y, Liu L. Remote sensing image registration with modified SIFT and enhanced feature matching. IEEE Geosci Remote Sens Lett 2017; 14(1): 3-7. DOI: 10.1109/LGRS.2016.2600858.
- Xiang Y, Wang F, You H. OS-SIFT: A robust SIFT-like algorithm for high-resolution optical-to-SAR image registration in suburban areas. IEEE Trans Geosci Remote Sens 2018; 56(6): 3078-3090. DOI: 10.1109/TGRS.2018.2790483.
- Zhao J, Yang D, Li Y, Xiao P, Yang J. Intelligent matching method for heterogeneous remote sensing images based on style transfer. IEEE J Sel Top Appl Earth Obs Remote Sens 2022; 15: 6723-6731. DOI: 10.1109/JSTARS.2022.3197748.
- Chen J, Xie H, Zhang L, Hu J, Jiang H, Wang G. SAR and optical image registration based on deep learning with co-attention matching module. Remote Sensing 2023; 15(15): 3879. DOI: 10.3390/rs15153879.
- Brunelli R. Template matching techniques in computer vision: Theory and practice. Chichester, UK: John Wiley & Sons Ltd; 2009. ISBN: 978-0-470-51706-2.
- Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 1981; 24(6): 381-395. DOI: 10.1145/358669.358692.
- Chum O, Matas J. Matching with PROSAC – progressive sample consensus. Proc IEEE Computer Society Conf on Computer Vision and Pattern Recognition 2005: 220-226. DOI: 10.1109/CVPR.2005.221.
- Lowe DG. Object recognition from local scale-invariant features. Proc Seventh IEEE Int Conf on Computer Vision 1999: 1150-1157. DOI: 10.1109/ICCV.1999.790410.
- Ke Y, Sukthankar R. PCA-SIFT: A more distinctive representation for local image descriptors. Proc 2004 IEEE Computer Society Conf on Computer Vision and Pattern Recognition 2004: II-II. DOI: 10.1109/CVPR.2004.1315206.
- Ghassabi Z, Shanbehzadeh J, Sedaghat A, Fatemizadeh E. An efficient approach for robust multimodal retinal image registration based on UR-SIFT features and PIIFD descriptors. EURASIP J Image Video Process 2013; 2013: 25. DOI: 10.1186/1687-5281-2013-25.
- Rublee E, Rabaud V, Konolige K, Bradski G. ORB: An efficient alternative to SIFT or SURF. Proc 2011 Int Conf on Computer Vision 2011: 2564-2571. DOI: 10.1109/ICCV.2011.6126544.
- Bay H, Tuytelaars T, Van Gool L. SURF: Speeded up robust features. In Book: Leonardis A, Bischof H, Pinz A, eds. Computer Vision – ECCV 2006. 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006, Proceedings, Part I. Berlin, Heidelberg: Springer-Verlag; 2006: 404-417. DOI: 10.1007/11744023_32.
- Rosten E, Drummond T. Machine learning for high-speed corner detection. In Book: Leonardis A, Bischof H, Pinz A, eds. Computer Vision – ECCV 2006. 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006, Proceedings, Part I. Berlin, Heidelberg: Springer-Verlag; 2006: 430-443. DOI: 10.1007/11744023_34.
- Yi KM, Trulls E, Lepetit V, Fua P. LIFT: learned invariant feature transform. In Book: Leibe B, Matas J, Sebe N, Welling M, eds. Computer Vision – ECCV 2016. 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VI. Cham, Switzerland: Springer International Publishing AG; 2016: 467-483. DOI: 10.1007/978-3-319-46466-4_28.
- Revaud J, De Souza C, Humenberger M, Weinzaepfel P. R2D2: Reliable and repeatable detector and descriptor. arXiv Preprint. 2019. Source: <https://arxiv.org/abs/1906.06195>. DOI: 10.48550/arXiv.1906.06195.
- DeTone D, Malisiewicz T, Rabinovich A. Superpoint: Self-supervised interest point detection and description. 2018 IEEE/CVF Conf on Computer Vision and Pattern Recognition Workshops (CVPRW) 2018: 224-236. DOI: 10.1109/CVPRW.2018.00060.
- Tyszkiewicz M, Fua P, Trulls E. DISK: Learning local features with policy gradient. NIPS'20: Proc 34th Int Conf on Neural Information Processing Systems 2020; 33: 14254-14265.
- Zhao X, Wu X, Chen W, Chen PC, Xu Q, Li Z. ALIKED: A lighter keypoint and descriptor extraction network via deformable transformation. IEEE Trans Instrum Meas 2023; 72: 5014016. DOI: 10.1109/TIM.2023.3271000.
- Sun J, Shen Z, Wang Y, Bao H, Zhou X. LoFTR: Detector-free local feature matching with transformers. 2021 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2021: 8918-8927. DOI: 10.1109/CVPR46437.2021.00881.
- Rocco I, Arandjelović R, Sivic J. Efficient neighbourhood consensus networks via submanifold sparse convolutions. In Book: Vedaldi A, Bischof H, Brox T, Frahm JM, eds. Computer Vision – ECCV 2020. 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IX. Cham, Switzerland: Springer Nature Switzerland AG; 2020: 605-621. DOI: 10.1007/978-3-030-58545-7_35.
- Liu C, Yuen J, Torralba A. SIFT flow: Dense correspondence across scenes and its applications. IEEE Trans Pattern Anal Mach Intell 2011; 33(5): 978-994. DOI: 10.1109/TPAMI.2010.147.
- Edstedt J, Athanasiadis I, Wadenbäck M, Felsberg M. DKM: Dense kernelized feature matching for geometry estimation. 2023 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2023: 17765-17775. DOI: 10.1109/CVPR52729.2023.01704.
- Edstedt J, Sun Q, Bökman G, Wadenbäck M, Felsberg M. RoMa: Robust dense feature matching. 2024 IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR) 2024: 19790-19800.
- Dong Y, Jiao W, Long T, He G, Gong C. An extension of phase correlation-based image registration to estimate similarity transform using multiple polar Fourier transform. Remote Sens 2018; 10(11): 1719. DOI: 10.3390/rs10111719.
- Chen HM, Arora MK, Varshney PK. Mutual information-based image registration for remote sensing data. Int J Remote Sens 2003; 24(18): 3701-3706. DOI: 10.1080/0143116031000117047.
- Hel-Or Y, Hel-Or H, David E. Matching by tone mapping: Photometric invariant template matching. IEEE Trans Pattern Anal Mach Intell 2014; 36(2): 317-330. DOI: 10.1109/TPAMI.2013.138.
- Stumpf A, Michéa D, Malet J-P. Improved co-registration of Sentinel-2 and Landsat-8 imagery for Earth surface motion measurements. Remote Sens 2018; 10(2): 160. DOI: 10.3390/rs10020160.
- Rengarajan R, Choate M, Hasan MN, Denevan A. Co-registration accuracy between Landsat-8 and Sentinel-2 orthorectified products. Remote Sens Environ 2024; 301: 113947. DOI: 10.1016/j.rse.2023.113947.
- 2021 IEEE GRSS Data Fusion Contest Track DSE. 2024. Source: <https://www.grss-ieee.org/community/technical-committees/2021-ieee-grss-data-fusion-contest-track-dse/>.
- Li J, Li Y, He L, Chen J, Plaza A. Spatio-temporal fusion for remote sensing data: An overview and new benchmark. Sci China Inf Sci 2020; 63: 140301. DOI: 10.1007/s11432-019-2785-y.
- Xie H, Pierce LE, Ulaby FT. Mutual information based registration of SAR images. 2003 IEEE Int Geoscience and Remote Sensing Symposium (IGARSS 2003) 2003; 6: 4028-4031. DOI: 10.1109/IGARSS.2003.1295351.
- Wang Y, Yu Q, Yu W. An improved Normalized Cross Correlation algorithm for SAR image registration. 2012 IEEE Int Geoscience and Remote Sensing Symposium 2012: 2086-2089. DOI: 10.1109/IGARSS.2012.6350961.
- Hu C, Zhu R, Sun X, Li X, Xiang D. Optical and SAR image registration based on pseudo-SAR image generation strategy. Remote Sens 2023; 15(14): 3528. DOI: 10.3390/rs15143528.
- Li J, Hu Q, Ai M. RIFT: Multi-modal image matching based on radiation-invariant feature transform. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1804.09493>. DOI: 10.48550/arXiv.1804.09493.
- Ye Y, Bruzzone L, Shan J, Bovolo F, Zhu Q. Fast and robust matching for multimodal remote sensing image registration. IEEE Trans Geosci Remote Sens 2019; 57(11): 9059-9070. DOI: 10.1109/TGRS.2019.2924684.
- Harris C, Stephens M. A combined corner and edge detector. Proc 4th Alvey Vision Conf 1988: 147-151. DOI: 10.5244/c.2.23.
- Schmitt M, Hughes LH, Zhu XX. The SEN1-2 dataset for deep learning in SAR-optical data fusion. ISPRS Ann Photogramm Remote Sens Spatial Inf Sci 2018; IV-1: 141-146. DOI: 10.5194/isprs-annals-IV-1-141-2018.
- Schmitt M, Hughes LH, Qiu C, Zhu XX. SEN12MS – A curated dataset of georeferenced multi-spectral Sentinel-1/2 imagery for deep learning and data fusion. arXiv Preprint. 2019. Source: <https://arxiv.org/abs/1906.07789>. DOI: 10.48550/arXiv.1906.07789.
- Xiang Y, Tao R, Wang F, You H, Han B. Automatic registration of optical and SAR images via improved phase congruency model. IEEE J Sel Top Appl Earth Obs Remote Sens 2020; 13: 5847-5861. DOI: 10.1109/JSTARS.2020.3026162.
- Chen Z, Zhang L, Zhang G. An improved InSAR image co-registration method for pairs with relatively big distortions or large incoherent areas. Sensors 2016; 16(9): 1519. DOI: 10.3390/s16091519.
- Pearson K. On lines and planes of closest fit to systems of points in space. Philos Mag 1901; 2(11): 559-572. DOI: 10.1080/14786440109462720.
- Ye Y, Yang C, Zhu B, Zhou L, He Y, Jia H. Improving co-registration for Sentinel-1 SAR and Sentinel-2 optical images. Remote Sens 2021; 13(5): 928. DOI: 10.3390/rs13050928.