Comparison between convolutional neural network and K-nearest neighbours object detection for autonomous drone

Annisa Istiqomah Arrahmah, Rissa Rahmania, Dany Eka Saputra

Abstract


In autonomous drones, the drone's ability to move depends on its capacity to know its position, either relative or absolute. The pinhole model is one method to calculate a drone's relative position, based on the triangle-similarity concept and a single camera. This method utilizes bounding-box information generated by an object detection algorithm; thus, the accuracy of the generated bounding box is crucial, and the selection of the object detection algorithm matters. This paper compares and evaluates a machine learning and a deep learning object detection method to determine which is suitable for distance measurement with a single camera in an autonomous drone controller based on the pinhole model. A novel K-nearest neighbours-based (KNN-based) object detector is constructed to represent the machine learning approach, while the you only look once version 5 (YOLOv5) convolutional neural network (CNN) architecture is selected to represent the deep learning approach. A dataset consisting of two classes, with a total of 1,520 images, is collected from an unmanned aerial vehicle (UAV) camera for training and evaluation. A confusion matrix and intersection over union (IoU)/generalized intersection over union (GIoU) metrics are used to evaluate the performance of both methods. The results show the performance of each system and indicate which type of object detection algorithm is suitable for the autonomous UAV purpose.
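The two quantitative ideas named in the abstract can be sketched briefly. The pinhole model recovers distance by triangle similarity, Z = f·W/w, where f is the focal length in pixels, W the known real-world object width, and w the detected bounding-box width in pixels; IoU and GIoU score a predicted box against the ground truth. The functions and parameter values below are illustrative assumptions, not taken from the paper:

```python
def pinhole_distance(focal_px, real_width_m, box_width_px):
    """Triangle-similarity distance estimate: Z = f * W / w (metres)."""
    return focal_px * real_width_m / box_width_px

def iou(a, b):
    """Intersection over union of boxes a, b given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union

def giou(a, b):
    """GIoU = IoU - |C \\ (A U B)| / |C|, C = smallest enclosing box."""
    # Smallest box enclosing both a and b
    c = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union - (c - union) / c
```

For example, with an assumed focal length of 800 px and a 0.5 m wide target detected at 100 px, `pinhole_distance(800, 0.5, 100)` gives 4.0 m. Unlike IoU, GIoU stays informative (and negative) for non-overlapping boxes, which is why it is often preferred as a bounding-box quality metric.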

Keywords


Autonomous drones; Convolutional neural network; K-nearest neighbours; Object detection; Pinhole model


DOI: https://doi.org/10.11591/eei.v11i4.3784


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
