visp_auto_tracker package from the vision_visp repo
Package Summary
| Tags | No category tags. |
| Version | 0.9.3 |
| License | GPLv2 |
| Build type | CATKIN |
| Use | RECOMMENDED |
Repository Summary
| Checkout URI | https://github.com/lagadic/vision_visp.git |
| VCS Type | git |
| VCS Version | melodic-devel |
| Last Updated | 2019-05-24 |
| Dev Status | MAINTAINED |
| Released | UNRELEASED |
Package Description
Additional Links
Maintainers
- Fabien Spindler
Authors
- Filip Novotny
visp_auto_tracker
visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code or Flash code pattern. Based on this pattern, the object is automatically detected. The detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker.
This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking with a camera.
This package provides a single node called 'visp_auto_tracker'. The node first tries to detect the QR code or Flash code associated with the object. Once detection succeeds, the node tracks the object. When tracking is lost, the node tries to detect the object again and then restarts tracking.
The viewer coming with the visp_tracker package can be used to monitor the tracking result.
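Other nodes can consume the estimated pose directly from ROS topics. The sketch below is a minimal, hypothetical roscpp subscriber: the topic name `/visp_auto_tracker/object_position` and the `geometry_msgs/PoseStamped` message type are assumptions, not taken from this page, so verify them with `rostopic list` and `rostopic info` while the tracker is running.

```cpp
// Minimal sketch (not part of this package): print the pose estimated by the
// visp_auto_tracker node. Topic name and message type are assumptions; verify
// them with `rostopic list` / `rostopic info` on your setup.
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

void poseCallback(const geometry_msgs::PoseStamped::ConstPtr& msg)
{
  ROS_INFO("Object position: x=%.3f y=%.3f z=%.3f (frame: %s)",
           msg->pose.position.x, msg->pose.position.y, msg->pose.position.z,
           msg->header.frame_id.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "object_pose_listener");
  ros::NodeHandle nh;
  // Assumed topic published by the tracker node.
  ros::Subscriber sub =
      nh.subscribe("/visp_auto_tracker/object_position", 1, poseCallback);
  ros::spin();
  return 0;
}
```

Since the node falls back to detection when tracking is lost, expect gaps in the pose stream until the pattern is detected again.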
- Project webpage on ros.org: tutorial and API reference
- Project webpage: source code download, bug report
Setup
This package contains submodules. It can be compiled like any other ROS package using catkin_make.
Prerequisites
visp_auto_tracker depends on the visp_bridge and visp_tracker packages available from https://github.com/lagadic/vision_visp (indigo-devel branches).
visp_auto_tracker also depends on the libdmtx-dev and libzbar-dev system packages. To install them, run:
$ sudo apt-get install libdmtx-dev libzbar-dev
How to get and build visp_auto_tracker
Assuming you have a catkin workspace, just run:
$ cd ~/catkin_ws/src
$ git clone -b indigo-devel https://github.com/lagadic/vision_visp.git
$ cd ..
$ catkin_make --pkg visp_auto_tracker
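Once catkin_make completes, source the workspace (for example `source ~/catkin_ws/devel/setup.bash`) so that the visp_auto_tracker node and its launch files are visible to rosrun and roslaunch.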
Documentation
The documentation is available on the project ROS homepage.
For more information, refer to the ROS tutorial.
Wiki Tutorials
Source Tutorials
Package Dependencies
| Deps | Name |
|---|---|
| 2 | geometry_msgs |
| 3 | message_filters |
| 2 | resource_retriever |
| 2 | roscpp |
| 2 | sensor_msgs |
| 2 | std_msgs |
| 0 | visp |
| 1 | visp_bridge |
| 1 | visp_tracker |
| 1 | catkin |
System Dependencies
| Name |
|---|
| libdmtx-dev |
| zbar |
Dependant Packages
| Name | Repo | Deps |
|---|---|---|
| vision_visp | github-lagadic-vision_visp | |
Launch files
- launch/tracklive_usb.launch
- This tutorial relies on a live video sequence acquired with a USB camera placed in front of a QR code planar target. The model corresponding to this target is provided in the models directory of this package. Camera parameters are set as rosparam parameters; they need to be changed to those of your camera. See http://www.ros.org/wiki/visp_auto_tracker for more information.
- launch/tutorial.launch
- This tutorial relies on a recorded video sequence where the camera is fixed in front of a QR code planar target. The model corresponding to this target is provided in the models directory of this package. See http://www.ros.org/wiki/visp_auto_tracker for more information.
- launch/tracklive_firewire.launch
- This tutorial relies on a live video sequence acquired with a FireWire camera placed in front of a QR code planar target. The model corresponding to this target is provided in the models directory of this package. Camera parameters are read from the models/calibration.ini file; they must match those of your camera. Here we use the viewer coming with the visp_tracker package to display the pose estimation results. See http://www.ros.org/wiki/visp_auto_tracker for more information.
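For example, after building and sourcing the workspace, the recorded-sequence demo can be started with `roslaunch visp_auto_tracker tutorial.launch`; the live demos are launched the same way once your camera is connected and its parameters have been adapted as described above.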
Messages
Services
Plugins
Recent questions tagged visp_auto_tracker at answers.ros.org
Other distributions
The same package page is repeated for the other ROS distributions. Apart from the repository summary data below, the package description, setup instructions, dependencies, and launch files are identical to the melodic-devel entry above; the hydro-devel entry references the hydro-devel branches instead of indigo-devel in its setup instructions (clone with -b hydro-devel).

| VCS Version | Version | Last Updated | Released |
|---|---|---|---|
| lunar-devel | 0.9.3 | 2017-10-31 | RELEASED |
| kinetic-devel | 0.9.3 | 2017-02-17 | RELEASED |
| indigo-devel | 0.9.1 | 2017-02-17 | RELEASED |
| jade-devel | 0.9.1 | 2017-02-17 | RELEASED |
| hydro-devel | 0.7.3 | 2015-11-03 | RELEASED |

As with the melodic-devel entry, the lunar-devel entry lists vision_visp as a dependent package; the kinetic-devel, indigo-devel, jade-devel, and hydro-devel entries list none.