
Object Recognition and Localization for Mobile Robot Indoor Navigation


The rapid evolution of technology and the development of robotic applications have made it possible to create autonomous robots that assist or replace humans in ordinary, everyday tasks. For instance, service robots coexist with humans in the same environment to provide support and assistance with housework and with caring for elderly or disabled people. Service robots usually work in unstructured environments and collaborate directly with humans, whereas industrial robots work in clearly structured environments with external safeguards.

These robots should be able to build an internal representation of the environment and localize themselves within it, possess a set of capabilities that allows them to interact and navigate in real environments, and understand commands given by humans through various methods. Arguably the most important sense for moving and interacting with the world safely is vision. Robots must be able to process visual data in real time to adapt to changes in the environment. For robot vision, therefore, the detection and recognition of indoor objects is one of the important research topics in computer vision.

Even though there have been great advances in the past decade, this issue still remains one of the most challenging problems in computer vision when real-life scenarios are considered. It is necessary to build a reliable and fast classification system to enhance the performance of indoor robot navigation. Various object classification systems have been proposed over the last decade.

Depending on the movement and location of the robot, objects can appear rotated, scaled, or otherwise transformed, so the features used for recognition should be robust to such variations.

  1. Motivation
  2. Statement of the Problem
  3. Objectives
  4. Methodologies
  5. Related Work

Takács et al. applied the SURF feature detector and descriptor to recognize objects such as chairs, fire extinguishers, and trash cans in order to support the indoor navigation of a robot (TurtleBot II). SURF features are extracted from the image and a bag of words (BoW) is created from the features. For the classification of objects they used a Support Vector Machine based on the BoW feature vectors, and they performed SURF localization on the classification result: the extracted SURF features are matched with stored ones to name the object and obtain its coordinates in the current frame. They developed a software module for the Taki (TurtleBot II) robot on the Ubuntu operating system and the ROS framework, and achieved 85% accuracy. The system does not recognize multiple objects in the same frame; if several objects of interest appear at once, the system becomes confused.
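
The following is a minimal sketch of such a SURF + bag-of-words + SVM pipeline in Python. It assumes opencv-contrib-python built with the non-free modules (which provide SURF) and scikit-learn; the training images and labels are hypothetical placeholders, and the sketch only illustrates the general approach, not the authors' implementation.

    # Minimal sketch of a SURF + bag-of-words + SVM pipeline.
    # Assumes opencv-contrib-python with non-free modules (for SURF) and scikit-learn;
    # the dataset layout and label names below are hypothetical.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    def surf_descriptors(image_bgr):
        """Extract SURF descriptors from a single image."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        _, desc = surf.detectAndCompute(gray, None)
        return desc if desc is not None else np.empty((0, 64), np.float32)

    def build_vocabulary(training_images, k=200):
        """Cluster all training descriptors into a k-word visual vocabulary."""
        all_desc = np.vstack([surf_descriptors(img) for img in training_images])
        return KMeans(n_clusters=k, random_state=0).fit(all_desc)

    def bow_histogram(image_bgr, vocabulary):
        """Encode an image as a normalized histogram over the visual words."""
        desc = surf_descriptors(image_bgr)
        hist = np.zeros(vocabulary.n_clusters, np.float32)
        if len(desc):
            for w in vocabulary.predict(desc):
                hist[w] += 1
            hist /= hist.sum()
        return hist

    # Training: one BoW vector per labeled image, then a multi-class SVM.
    # train_images / train_labels are placeholders for the robot's object dataset.
    # vocab = build_vocabulary(train_images)
    # X = np.array([bow_histogram(img, vocab) for img in train_images])
    # clf = SVC(kernel="rbf").fit(X, train_labels)
    # prediction = clf.predict([bow_histogram(query_frame, vocab)])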

Hernandez et al. developed a vision system for a mobile robot to detect objects (closets, chairs, and screens). The proposed system contains three stages: a training stage, a retrieval stage, and a classification stage. In the retrieval stage, the depth image and RGB image, obtained by subscribing to ROS topics from the ASUS Xtion Pro Live, are used to segment the object. Features are extracted from the segmented image and passed to the classification stage, where a Support Vector Machine (SVM) is used as the classification algorithm. They explored two approaches: one-against-all, where three SVMs (one per object class) are used, and one-against-one (multi-class classification). The model accuracy is 81.58% for closets, 72.79% for chairs, and 65.60% for screens with the first approach, and 81.97%, 76.56%, and 60.13% respectively with the second.
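
As an illustration of the two multi-class strategies they compared, the short sketch below trains a one-against-all and a one-against-one SVM with scikit-learn. The feature vectors X and labels y are placeholders standing in for the descriptors extracted from the segmented RGB-D objects; this is not the authors' code.

    # Sketch of the two multi-class SVM strategies, using scikit-learn.
    # X holds placeholder feature vectors (standing in for features from the
    # segmented RGB-D objects), y holds the class labels.
    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 32))            # placeholder feature vectors
    y = rng.integers(0, 3, size=300)          # 0 = closet, 1 = chair, 2 = screen

    # One-against-all: one binary SVM per object class.
    ova = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)

    # One-against-one: one binary SVM per pair of classes.
    ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)

    print(ova.predict(X[:5]), ovo.predict(X[:5]))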

Yonglong Luo et al. proposed a convolutional neural network-based pipeline for indoor object detection. The work includes region of interest (ROI) extraction using the selective search object proposal method. Every ROI is classified by a pre-trained deep model, and detection fusion is then applied to refine candidates between neighboring frames. They used CaffeNet as the reference implementation, with a public indoor dataset and a private frames-of-video (FoV) dataset.
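
A rough sketch of the ROI-classification stage of such a pipeline is shown below. It uses OpenCV's selective search implementation for region proposals and a pre-trained torchvision ResNet in place of CaffeNet, and it omits the detection fusion between neighboring frames; the file name, threshold, and ROI limit are illustrative assumptions.

    # Sketch of the ROI-classification stage: selective-search proposals are
    # cropped and scored with a pre-trained CNN. Uses OpenCV's selective search
    # (opencv-contrib-python) and a torchvision ResNet standing in for CaffeNet;
    # temporal detection fusion between frames is omitted.
    import cv2
    import torch
    from torchvision import models, transforms

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    preprocess = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    def propose_rois(frame_bgr, max_rois=200):
        """Generate candidate object regions with selective search."""
        ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
        ss.setBaseImage(frame_bgr)
        ss.switchToSelectiveSearchFast()
        return ss.process()[:max_rois]        # (x, y, w, h) boxes

    def classify_rois(frame_bgr, rois, score_threshold=0.8):
        """Classify each ROI crop and keep only confident detections."""
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        detections = []
        with torch.no_grad():
            for (x, y, w, h) in rois:
                crop = frame_rgb[y:y + h, x:x + w]
                probs = torch.softmax(model(preprocess(crop).unsqueeze(0)), dim=1)
                score, label = probs.max(dim=1)
                if score.item() >= score_threshold:
                    detections.append(((x, y, w, h), label.item(), score.item()))
        return detections

    # frame = cv2.imread("indoor_frame.jpg")  # placeholder input frame
    # print(classify_rois(frame, propose_rois(frame)))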
