RGB-D and corrupted images in assistive blind systems in smart cities

Amany Yehia, Shereen A. Taie

Abstract


Assistive systems for the visually impaired in smart cities help users perform their daily tasks, but they face two problems when using You Only Look Once version 3 (YOLOv3) for object detection: handling red-green-blue-depth (RGB-D) images and handling corrupted images. Object detection is a computer vision technique that identifies and locates instances of objects in images or videos, and it can be realized with different technologies, algorithms, and architectures. YOLOv3 is a recent object detection technique that delivers promising results: because it determines all objects in a scene, together with their locations and types, in a single pass, it is faster than other object detection techniques. This paper addresses the two problems above by introducing two novel preprocessing approaches that extend YOLOv3 to RGB-D and corrupted images. The first phase introduces a new preprocessing model that automatically adapts RGB-D images for YOLOv3, achieving an accuracy of 61.50% in detection and 57.02% in recognition. The second phase presents a preprocessing stage that repairs corrupted images so the YOLOv3 architecture can be applied, achieving a high accuracy of 77.39% in detection and 71.96% in recognition.
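The abstract names two preprocessing ideas: adapting four-channel RGB-D input for a three-channel YOLOv3 network, and repairing corrupted pixels before detection. The sketch below illustrates both ideas in a minimal form; the fusion rule (blending normalized depth into the color channels) and the median-based repair are illustrative assumptions, not the preprocessing models the paper proposes, and all function names are hypothetical.

```python
import numpy as np

def normalize_depth(depth):
    """Scale a raw depth map to the 0-255 uint8 range (hypothetical helper)."""
    d = depth.astype(np.float32)
    span = max(float(d.max() - d.min()), 1e-6)
    return (np.clip((d - d.min()) / span, 0.0, 1.0) * 255).astype(np.uint8)

def fuse_rgbd(rgb, depth):
    """Fuse an RGB image (H, W, 3) and a depth map (H, W) into one
    3-channel image that a stock YOLOv3 network can consume, by
    averaging normalized depth into each color channel.
    This fusion rule is an assumption for illustration only."""
    d = normalize_depth(depth)[..., None]                 # (H, W, 1)
    return ((rgb.astype(np.float32) + d) / 2).astype(np.uint8)

def repair_corrupted(img, mask):
    """Fill corrupted pixels (mask == True) of a single-channel image
    with the median of their 3x3 neighborhood; an illustrative
    stand-in for a corrupted-image preprocessing stage."""
    out = img.copy()
    padded = np.pad(img, 1, mode="edge")
    for y, x in zip(*np.nonzero(mask)):
        out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out
```

Either helper would run before the detector, so YOLOv3 itself stays unchanged, which is consistent with the abstract's claim that the improvements are purely preprocessing phases.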




DOI: https://doi.org/10.11591/eei.v11i4.3770



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
