
Investigation of an object-detection approach for estimating the rock fragmentation in the open-pit conditions
K. Reshetnikov 1, M. Ronkin 1, S. Porshnev 1

Federal State Autonomous Educational Institution of Higher Education
«Ural Federal University named after the first President of Russia B.N. Yeltsin»,
620002, Ekaterinburg, Russia, Mira str. 19


DOI: 10.18287/2412-6179-CO-1382

Pages: 272-281.

Full text of article: Russian language.

Abstract:
Optimization of open-pit mining is one of the significant current tasks, with the blasting quality estimation being a key factor. The blasting quality is determined by evaluating the number of fragments and the block size distribution, the so-called fragmentation task. Currently, computer vision methods using instance or semantic segmentation are the most widely applied to this task. However, in practice, such approaches require substantial computational resources. Because of this, the use of alternative techniques based on real-time object detection algorithms is highly relevant. The paper studies the use of YOLO-family architectures for solving the task of blasting quality assessment. Based on the research results, the YOLOv7x architecture is proposed as a baseline model. The proposed neural network architecture was trained on a dataset compiled by the present authors from digital images of blasted open-pit block fragments, which consisted of 220 images. The obtained results also allow one to suggest the geometrical size of rock chunks as a measure of blasting quality.
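The pipeline the abstract describes can be illustrated with a minimal sketch: a YOLO-family detector returns one bounding box per rock fragment, and the geometrical fragment sizes are then derived from the box dimensions. The sketch below is a hypothetical post-processing step, not the authors' implementation; the box format `(x1, y1, x2, y2)`, the geometric-mean size proxy, and the ground-sampling-distance value are all assumptions for illustration.

```python
import numpy as np

def fragment_sizes(boxes_px, gsd_m_per_px):
    """Estimate per-fragment size from detector bounding boxes.

    boxes_px: iterable of (x1, y1, x2, y2) boxes in pixels, e.g. as
    produced by a YOLO-family detector for the class "rock fragment".
    gsd_m_per_px: ground sampling distance, metres per pixel.
    The geometric mean of box width and height serves as a simple
    proxy for the equivalent fragment diameter.
    """
    boxes = np.asarray(boxes_px, dtype=float)
    w = (boxes[:, 2] - boxes[:, 0]) * gsd_m_per_px
    h = (boxes[:, 3] - boxes[:, 1]) * gsd_m_per_px
    return np.sqrt(w * h)

def passing_percentile(sizes_m, q):
    """Size below which q percent of detected fragments fall (e.g. P50, P80)."""
    return float(np.percentile(sizes_m, q))

# Toy detections at an assumed 1 cm/pixel resolution.
boxes = [(0, 0, 40, 40), (10, 10, 90, 50), (5, 5, 25, 45)]
sizes = fragment_sizes(boxes, gsd_m_per_px=0.01)
p50 = passing_percentile(sizes, 50)  # median fragment size, metres
```

Aggregating such per-fragment sizes into a cumulative distribution is what makes the geometrical size of rock chunks usable as a blasting-quality measure.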

Keywords:
fragmentation, deep learning, object detection, computer vision, open-pit, blast quality estimation.

Citation:
Reshetnikov KI, Ronkin MV, Porshnev SV. Investigation of an object-detection approach for estimating the rock fragmentation in the open-pit conditions. Computer Optics 2024; 48(2): 272-281. DOI: 10.18287/2412-6179-CO-1382.

Acknowledgements:
This research was financially supported by the Russian Science Foundation and Government of Sverdlovsk region under joint grant No 22-21-20051, https://rscf.ru/en/project/22-21-20051/.

References:

  1. Luzin V. Complex studies of longitudinal-fiber chrysotile asbestos of the Bazhenov deposit [In Russian]. Source: <http://resources.krc.karelia.ru/krc/doc/publ2011/miner_tech_ocenka_118-126.pdf>.
  2. Shrivastava S, Bhattacharjee S, Debasis D. Segmentation of mine overburden dump particles from images using Mask R CNN. Sci Rep 2023; 13: 2046. DOI: 10.1038/s41598-023-28586-0.
  3. Vu T, Bao T, Hoang Q, Drebenstedt C, Hoa P, Thang H. Measuring blast fragmentation at Nui Phao open-pit mine, Vietnam using the Mask R-CNN deep learning model. Mining Technology 2022; 130(4): 232-243. DOI: 10.1080/25726668.2021.1944458.
  4. Mohammad B, Mohammad A, Farhang S, Farzad S, Sadjad M. A new framework for evaluation of rock fragmentation in open pit mines. J Rock Mech Geotech Eng 2019; 11(2): 325-336.
  5. Bamford T, Esmaeili K, Schoellig A. A deep learning approach for rock fragmentation analysis. Int J Rock Mech Min Sci 2021; 145: 104839.
  6. Jung D, Choi Y. Systematic review of machine learning applications in mining: Exploration, exploitation, and reclamation. Minerals 2021; 11(2): 148.
  7. Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press; 2016.
  8. Redmon J, Farhadi A. YOLOv3: An Incremental Improvement. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1804.02767>.
  9. ultralytics/yolov5. 2020. Source: <https://github.com/ultralytics/yolov5>.
  10. Wang W, Li Q, Xiao C, Zhang D, Miao L, Wang L. An improved boundary-aware U-Net for ore image semantic segmentation. Sensors 2021; 21(8): 2615. DOI: 10.3390/s21082615.
  11. Li H, Pan C, Chen Z, Wulamu A, Yang A. Ore image segmentation method based on U-Net and Watershed. Comput Mater Contin 2020; 65(1): 563-578.
  12. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In Book: Medical image computing and computer-assisted intervention – MICCAI 2015. Pt III. Cham, Heidelberg: Springer International Publishing Switzerland; 2015: 234-241.
  13. He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. 2017 IEEE Int Conf on Computer Vision (ICCV) 2017: 2980-2988. DOI: 10.1109/ICCV.2017.322.
  14. Ramesh S, Kumar V. A review on instance segmentation using mask R-CNN (December 24, 2020). Proc Int Conf on Systems, Energy & Environment (ICSEE) 2021: 1-4. DOI: 10.2139/ssrn.3794272.
  15. CivilNode. Gold Size 2.0 Download. Source:  <https://civilnode.com/download-software/10159053855788/gold-size-20>.
  16. Fitzgibbon A, Pilu M, Fisher RB. Direct least square fitting of ellipses. IEEE Trans Pattern Anal Mach Intell 1999; 21(5): 476-480. DOI: 10.1109/34.765658.
  17. Li M, Wang X, Yao H, Saxén H, Yu Y. Analysis of particle size distribution of coke on blast furnace belt using object detection. Processes 2022; 10(10): 1902.
  18. Gu W, Bai S, Kong L. A review on 2D instance segmentation based on deep neural networks. Image Vis Comput 2022; 120: 104401.
  19. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2016 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 770-778.
  20. Schenk F, Tscharf A, Mayer G, Fraundorfer F. Automatic muck pile characterization from uav images. ISPRS Ann Photogramm Remote Sens Spat Inf Sci 2019; IV-2/W5: 163-170.
  21. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. 2016 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 779-788.
  22. Zyuzin V, Ronkin M, Porshnev S, Kalmykov A. Automatic asbestos control using deep learning based computer vision system. Appl Sci 2021; 11(22): 10532. DOI: 10.3390/app112210532.
  23. Mendeley Data. openpits asbestos. Source:  <https://data.mendeley.com/datasets/pfdbfpfygh/2>.
  24. Redmon J, Farhadi A. YOLO9000: Better, faster, stronger. 2017 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 6517-6525. DOI: 10.1109/CVPR.2017.690.
  25. Bochkovskiy A, Wang C, Liao H. Yolov4: Optimal speed and accuracy of object detection. arXiv Preprint. 2020. Source: <https://arxiv.org/abs/2004.10934>.
  26. Diwan T, Anirudh G, Tembhurne JV. Object detection using YOLO: Challenges, architectural successors, datasets and applications. Multimed Tools Appl 2022; 82: 9243-9275.
  27. Jiang P, Ergu D, Liu F, Cai Y, Ma B. A review of Yolo algorithm developments. Procedia Comput Sci 2022; 199: 1066-1073.
  28. Wang C, Bochkovskiy A, Liao H. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv Preprint. 2022. Source: <https://arxiv.org/abs/2207.02696>.
  29. Jocher G. What is YOLOv8? The ultimate guide. 2023. Source: <https://blog.roboflow.com/whats-new-in-yolov8/>.
  30. Li C, et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv Preprint. 2022. Source: <https://arxiv.org/abs/2209.02976>.
  31. CVAT. Open Data Annotation Platform. Source: <https://cvat.ai/>.
  32. Racine JS. An introduction to the advanced theory and practice of nonparametric econometrics. Cambridge University Press; 2019.
  33. Real-time object detection. 2023. Source: <https://paperswithcode.com/task/real-time-object-detection>.
  34. facebookresearch/detectron2. 2019. Source: <https://github.com/facebookresearch/detectron2>.
  35. Hui J. mAP (mean Average Precision) for object detection. Source: <https://jonathan-hui.medium.com/map-mean-average-precision-for-object-detection-45c121a31173>.
  36. Ramdas A, Garcia N, Cuturi M. On Wasserstein two sample testing and related families of nonparametric tests. arXiv Preprint. 2015. Source: <https://arxiv.org/abs/1509.02237>.

© 2009, IPSI RAS
151, Molodogvardeiskaya str., Samara, 443001, Russia; E-mail: journal@computeroptics.ru ; Tel: +7 (846) 242-41-24 (Executive secretary), +7 (846) 332-56-22 (Issuing editor), Fax: +7 (846) 332-56-20