Illustration visual communication based on computer vision image retrieval algorithm
H.Z. Zhang 1
1 School of Art and Design, Minnan Science and Technology College,
No. 8 Kangyuan Road, Kangmei Town, Quanzhou 362000, China
DOI: 10.18287/2412-6179-CO-1449
Pages: 132-140.
Article language: English.
Abstract:
In illustration design, good visual communication can make the audience resonate with the work. Computer vision image retrieval algorithms provide important support for the visual communication of illustration. However, traditional image retrieval algorithms suffer from subjectivity and inaccuracy when classifying complex images. Therefore, this paper optimizes the feature extraction module of a convolutional neural network and fuses it with a hash algorithm to improve the efficiency and speed of image retrieval. The experimental results show that the accuracy of the improved convolutional neural network is 82.7 %, more than 6 percentage points higher than that of the traditional algorithm model, and the recall rate of the convolutional neural network model improved by the hashing algorithm is 94.1 %. This research is of great significance to the visual communication of illustration design: it helps designers find relevant materials more accurately, improves the artistic quality and ornamental value of their works, and promotes the innovation and development of illustration design.
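Illustrative note: the following minimal Python sketch (assuming PyTorch; not the author's implementation) shows the general idea the abstract describes, namely a CNN feature extractor whose output is mapped through a hash layer to compact binary codes, so that candidate illustrations can be retrieved by fast Hamming-distance comparison rather than full floating-point similarity search. All class names, layer sizes, and the 48-bit code length are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn


class HashingCNN(nn.Module):
    """Toy CNN feature extractor followed by a hash layer producing K-bit codes."""

    def __init__(self, num_bits: int = 48):
        super().__init__()
        self.features = nn.Sequential(             # small stand-in for the paper's
            nn.Conv2d(3, 32, 3, padding=1),        # optimized feature-extraction module
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.hash_layer = nn.Linear(64, num_bits)  # continuous "hash logits"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.tanh(self.hash_layer(h))      # in (-1, 1); binarized by sign


def binary_codes(model: nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Binarize the network output to {0, 1} codes for storage in a hash index."""
    with torch.no_grad():
        return (model(images) > 0).to(torch.uint8)


def hamming_retrieve(query_code: torch.Tensor, db_codes: torch.Tensor, top_k: int = 5):
    """Rank database items by Hamming distance to the query code."""
    dists = (query_code ^ db_codes).sum(dim=1)     # XOR + popcount per item
    return torch.topk(dists, k=top_k, largest=False).indices


if __name__ == "__main__":
    model = HashingCNN(num_bits=48).eval()
    database = torch.rand(100, 3, 64, 64)          # placeholder illustration images
    query = torch.rand(1, 3, 64, 64)
    db_codes = binary_codes(model, database)
    q_code = binary_codes(model, query)
    print(hamming_retrieve(q_code[0], db_codes))   # indices of nearest candidates

In practice the hash layer would be trained with a similarity-preserving loss before binarization; the sketch only illustrates the retrieval path.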
Keywords:
visual communication; image retrieval; convolutional neural network; hash algorithm.
Acknowledgments
The research was supported by the Fujian Provincial Department of Education’s 2017 Young and Middle-aged Teacher Education Research Project (Social Sciences) “Research on the Combination of Chinese and Western Architectural Decoration in Modern Overseas Chinese Residential Buildings in Quanzhou” (Project No.: JAS170827).
Citation:
Zhang HZ. Illustration visual communication based on computer vision image retrieval algorithm. Computer Optics 2025; 49 (1): 132-140. DOI: 10.18287/2412-6179-CO-1449.