
Visual attention method based on vertex ranking of graphs by heterogeneous image attributes
A.A. Zakharov 1, D.V. Titov 2, A.L. Zhiznyakov 1, V.S. Titov 2

1 Murom Institute (branch), Vladimir State University named after Alexander and Nikolay Stoletovs, Murom, Russia
2 Southwest State University, Kursk, Russia


DOI: 10.18287/2412-6179-CO-658

Pages: 427-435.

Full text of the article: in Russian.

Abstract:
The paper discusses a visual attention method based on ranking graph vertices by heterogeneous image features. The aim of the research is to develop a method that detects objects with high precision in images with low color contrast between the object and the background. To compute the saliency map, the image is first segmented into regions, which form the vertices of a graph. Each region is connected to its neighboring regions and to the regions adjacent to those neighbors. The graph vertices are ranked according to the features of the corresponding image regions, and the saliency map is computed from queries formed by the background regions; regions adjacent to the image borders are taken as background. The existing manifold-ranking approach to visual attention uses only color features; the proposed method additionally uses texture and shape features to improve accuracy. Texture features are computed with the Gabor energy function, and shape is characterized by the distances between region centers. The proposed method shows good results in detecting objects whose color lies in a range similar to that of the background. Experimental results on test images are presented, and precision-recall curves demonstrate the advantage of the developed method.
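The sketch below illustrates the graph-based manifold ranking step described above, in the spirit of Zhou et al. [36] and Yang et al. [38]: regions are graph vertices, edge weights come from feature similarity, and saliency is obtained from background (image-border) queries. The feature composition, the parameter values sigma and alpha, and the helper names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def manifold_rank(features, adjacency, query, sigma=0.1, alpha=0.99):
    """Rank graph vertices (regions) by relevance to the query nodes.

    features  : (n, d) array, one feature vector per region (e.g. mean color,
                Gabor energy, distance between region centers)
    adjacency : (n, n) boolean array, True where two regions are connected
                (neighbors and neighbors of neighbors)
    query     : (n,) array, 1 for query (background) regions, 0 otherwise
    """
    # Edge weights from feature similarity of connected regions.
    diff = features[:, None, :] - features[None, :, :]
    w = np.exp(-np.sum(diff ** 2, axis=2) / sigma ** 2)
    w = np.where(adjacency, w, 0.0)
    np.fill_diagonal(w, 0.0)
    # Closed-form manifold ranking: f = (D - alpha * W)^(-1) y
    d = np.diag(w.sum(axis=1))
    f = np.linalg.solve(d - alpha * w, query.astype(float))
    # Normalize the ranking scores to [0, 1].
    return (f - f.min()) / (f.max() - f.min() + 1e-12)

def border_saliency(features, adjacency, border_masks):
    # Combine rankings from the four image-border (background) queries:
    # regions weakly connected to every border receive high saliency.
    saliency = np.ones(features.shape[0])
    for mask in border_masks:  # top, bottom, left, right border regions
        saliency *= 1.0 - manifold_rank(features, adjacency, mask)
    return saliency / (saliency.max() + 1e-12)
```

In the full method the region feature vectors would combine color, Gabor-energy texture, and region-center distance components; the fixed sigma and alpha above are placeholders for values chosen experimentally.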

Keywords:
image analysis, visual attention, graph, image attributes, ranking, computer vision.

Citation:
Zakharov AA, Titov DV, Zhiznyakov AL, Titov VS. Visual attention method based on vertex ranking of graphs by heterogeneous image attributes. Computer Optics 2020; 44(3): 427-435. DOI: 10.18287/2412-6179-CO-658.

Acknowledgements:
This work was financially supported by the Ministry of Science and Higher Education of the Russian Federation (State task of VlSU GB-1187/20).

References:

  1. Koch K, McLean J, Segev R, Freed MA, Berry MJ, Balasubramanian V, Sterling P. How much the eye tells the brain. Current Biology 2006; 16(14): 1428-1434.
  2. Borji A, Itti L. State-of-the-art in visual attention modeling. IEEE Trans Pattern Anal Machine Intell 2013; 35(1): 185-207.
  3. Begum M, Karray F. Visual attention for robotic cognition: A survey. IEEE Trans Auton Ment Dev 2011; 3(1): 92-105.
  4. Mahdi A, Su M, Schlesinger M, Qin J. A comparison study of saliency models for fixation prediction on infants and adults. IEEE Trans Cogn Dev Syst 2018; 10(3): 485-498.
  5. Garg A, Negi A. A survey on visual saliency detection and computational methods. Int J Eng Technol 2017; 9(4): 2742-2753.
  6. Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Machine Intell 1998; 20(11): 1254-1259.
  7. Frintrop S. VOCUS: a visual attention system for object detection and goal-directed search. Heidelberg, Germany: Springer-Verlag; 2006.
  8. Itti L, Dhavale N, Pighin F. Realistic avatar eye and head animation using a neurobiological model of visual attention. Proc SPIE 2003; 5200: 64-78.
  9. Wang J, Da Silva MP, Le Callet P, Ricordel V. Computational model of stereoscopic 3D visual saliency. IEEE Trans Image Process 2013; 22(6): 2151-2165.
  10. Harel J, Koch C, Perona P. Graph-based visual saliency. Neural Inform Process Syst 2006; 19: 545-552.
  11. Salvucci DD. An integrated model of eye movements and visual encoding. Cogn Syst Res 2001; 1: 201-220.
  12. Tatler BW. The central fixation bias in scene viewing: selecting an optimal viewing position independently of motor biases and image feature distributions. J Vis 2007; 7(14): 1-17.
  13. Vijayakumar S. Overt visual attention for a humanoid robot. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems 2001.
  14. Kadir T, Brady M. Saliency, scale and image description. Int J Comp Vis 2001; 45(2): 83-105.
  15. Kootstra G, Nederveen A, de Boer B. Paying attention to symmetry. British Machine Vis Conf 2008: 1115-1125.
  16. Parkhurst D, Law K, Niebur E. Modeling the role of salience in the allocation of overt visual attention. Vis Res 2002; 42(1): 107-123.
  17. Plastinin AI, Khramov AG, Soifer VA. Texture defects detection on microscale images of materials. Computer Optics 2011; 35(2): 158-165.
  18. Vizilter YV, Gorbatsevich VS, Vishnyakov BV, Sidyakin SV. Object detection in images using morphlet descriptions. Computer Optics 2017; 41(3): 406-411. DOI: 10.18287/2412-6179-2017-41-3-406-411.
  19. Goferman S, Zelnik-Manor L, Tal A. Context-aware saliency detection. IEEE Trans Pattern Anal Machine Intell 2012; 34(10): 1915-1926.
  20. Erdem E, Erdem A. Visual saliency estimation by nonlinearly integrating features using region covariances. J Vis 2013; 13(4): 11.
  21. Li X, Lu H, Zhang L, Ruan X, Yang M-H. Saliency detection via dense and sparse reconstruction. IEEE Int Conf Comp Vis 2013: 2976-2983.
  22. Tavakoli HR, Rahtu E, Heikkila J. Fast and efficient saliency detection using sparse sampling and kernel density estimation. Scandinavian Conf Image Anal 2011: 666-675.
  23. Yang C, Zhang L, Lu H. Graph-regularized saliency detection with convex-hull-based center prior. IEEE Sign Process Lett 2013; 20(7): 637-640.
  24. Jiang B, Zhang L, Lu H, Yang C, Yang MH. Saliency detection via absorbing Markov chain. IEEE Int Conf Comp Vis 2013: 1665-1672.
  25. Margolin R, Tal A, Zelnik-Manor L. What makes a patch distinct? IEEE Conf Comp Vis Pattern Recogn 2013: 1139-1146.
  26. Rahtu E, Kannala J, Salo M, Heikkila J. Segmenting salient objects from images and videos. European Conf Comp Vis 2010: 366-379.
  27. Seo HJ, Milanfar P. Static and space-time visual saliency detection by self-resemblance. J Vis 2009; 9(12): 15.
  28. Murray N, Vanrell M, Otazu X, Parraga CA. Saliency estimation using a non-parametric low-level vision model. IEEE Conf Comp Vis Pattern Recogn 2011: 433-440.
  29. Hou X, Zhang L. Saliency detection: a spectral residual approach. IEEE Conf Comp Vis Pattern Recogn 2007: 1-8.
  30. Zhang L, Tong MH, Marks TK, Shan H, Cottrell GW. SUN: A Bayesian framework for saliency using natural statistics. J Vis 2008; 8(7): 32.
  31. Duan L, Wu C, Miao J, Qing L, Fu Y. Visual saliency detection by spatially weighted dissimilarity. IEEE Conf Comp Vis Pattern Recogn 2011: 473-480.
  32. Tsotsos JK, Culhane SM, Wai WYK, Lai Y, Davis N, Nuflo F. Modeling visual attention via selective tuning. Artificial Intelligence 1995; 78: 507-545.
  33. Zhao R, Ouyang W, Li H, Wang X. Saliency detection by multi-context deep learning. IEEE Conf Comp Vis Pattern Recogn 2015: 1265-1274.
  34. Almeida AF, Figueiredo R, Bernardino A, Santos-Victor J. Deep networks for human visual attention: a hybrid model using foveal vision. ROBOT 2017: 3rd Iberian Robotics Conference 2017; 117-128.
  35. Wang W, Shen J. Deep visual attention prediction. IEEE Trans Image Process 2018; 27(5): 2368-2378.
  36. Zhou D, Weston J, Gretton A, Bousquet O, Scholkopf B. Ranking on data manifolds. NIPS'03 2004: 169-176.
  37. Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Machine Intell 2012; 34(11): 2274-2282.
  38. Yang C, Zhang L, Lu H, Ruan X, Yang M. Saliency detection via graph-based manifold ranking. IEEE Conf Comp Vis Pattern Recogn 2013: 3166-3173.
  39. Andrysiak T, Choras M. Image retrieval based on hierarchical Gabor filters. Int J Appl Math Comp Sci 2005; 15(4): 471-480.
  40. Randen T, Husoy JH. Filtering for texture classification: A comparative study. IEEE Trans Pattern Anal Machine Intell 1999; 21(4): 291-310.
  41. Borji A, Cheng M-M, Jiang H, Li J. Salient object detection: A benchmark. IEEE Trans Image Process 2015; 24(12): 5706-5723.
