
Journal of Information Science and Engineering, Vol. 36, No. 5, pp. 1055-1067

Interactive Robotic Testbed for Performance Assessment of Machine Learning based Computer Vision Techniques

Cyber Physical Systems Lab, Department of Electronics
Cochin University of Science and Technology
Kerala, 682022 India
E-mail: {nithinpb180; albertfrancis32632}@gmail.com; {ajaichemmanam; bijoyjose}@cusat.ac.in

+Department of Computer Science and Engineering
Indian Institute of Technology Patna
Bihar, 801103 India
E-mail: jimson@iitp.ac.in

Computer vision, a widely researched topic for decades, received a major boost with the arrival of high-performance and cloud computing. Online and offline techniques for object detection, recognition, and tracking have a large impact on real-world applications such as video surveillance, biometric authentication, and targeted advertising. With machine learning, conventional feature-extraction-based implementations have given way to model-based implementations, which demand high compute speed to keep up with complex trained models. Machine learning has solved traditional computer vision problems such as image classification and is now being applied to newer problems such as object tracking and object segmentation. Assessing the performance of computer vision applications in object tracking, when they are implemented with machine learning solutions, is therefore a high priority. With this intent, we propose a robotic testbed for computer vision applications such as face recognition, tracking, gesture detection, and character recognition. The testbed includes a hardware tracking system based on face detection and recognition. A fully functional robot with a table-lamp design runs these applications using multiple algorithms, and their performance parameters are compared. Since the setup has low compute power, the robot works properly only with optimized implementations. Visual intelligence to recognize gestures and the capability to read text were also integrated into the robot.
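The hardware tracking system must convert a detected face's position into actuator commands for the pan/tilt head. The sketch below is our own illustration, not the paper's implementation: a simple proportional controller that nudges assumed 0-180 degree servos so the face bounding box drifts toward the frame centre. The frame size, gain, and servo range are all assumed values.

```python
def track_step(bbox, frame_w=640, frame_h=480, pan=90.0, tilt=90.0, gain=0.05):
    """Return updated (pan, tilt) servo angles in degrees that move the
    camera toward the centre of a face bounding box.

    bbox: (x, y, w, h) in pixels, as a typical face detector returns.
    The proportional gain and the 0-180 degree servo range are assumptions
    for illustration only.
    """
    x, y, w, h = bbox
    # Pixel error between the face centre and the frame centre.
    err_x = (x + w / 2) - frame_w / 2
    err_y = (y + h / 2) - frame_h / 2
    # Proportional update, clamped to the servo's mechanical range.
    pan = min(180.0, max(0.0, pan - gain * err_x))
    tilt = min(180.0, max(0.0, tilt - gain * err_y))
    return pan, tilt


if __name__ == "__main__":
    # Face left of centre: the pan angle increases to turn toward it.
    print(track_step((180, 220, 40, 40)))  # → (96.0, 90.0)
    # Face already centred: angles are unchanged.
    print(track_step((300, 220, 40, 40)))  # → (90.0, 90.0)
```

Running such an update once per captured frame gives a crude but serviceable tracking loop; a real implementation would rate-limit the servo commands and smooth the detector's jitter.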

Keywords: face detection and tracking, performance analysis, computer vision, machine learning, neural network, gesture recognition, optical character recognition
