Friday, February 11, 2011
Brief video of Maxwell driving, moving his head, and moving his arm. Next up: touching up the IK and getting the Kinect calibrated to the body.
-Fergs
Sunday, February 6, 2011
Introducing Maxwell

Maxwell is my latest attempt at a low-cost, human-scale mobile manipulator using an ArbotiX and ROS. The design guidelines were pretty straightforward: it needed an arm that could manipulate things on a tabletop, a Kinect as the primary sensor on the head, and a mobile base that kept all that stuff upright. Additionally, I wanted the robot to be easy to transport and/or ship.
Maxwell sports a larger, 16x16" version of the Armadillo base, with motors that should support a 20 lb payload at speeds up to 0.5 m/s. Not shown in these images is a Hokuyo URG-04LX-UG01, which will be mounted on the base just in front of the column. The head has two AX-12 servos for pan and tilt. Eventually, the head will include a shotgun microphone.




So what will Maxwell be up to? His first task will, hopefully, be competing in the AAAI Small Scale Manipulation Challenge. Once the arm IK is functioning and the Kinect->arm transformation has been corrected, we'll be working on recognizing and moving chess pieces. After the AAAI event, he'll probably get an upgrade to a vertical lift on the arm (similar to the one on Georgia Tech's EL-E) and possibly a conversion to two more anthropomorphic arms. Speaking of EL-E, I couldn't help but replicate that classic pose:


And that's all for now...
-Fergs
Wednesday, February 2, 2011
Snow Day!
I've been a bit quiet on this blog lately -- it's been a busy start to the new semester. Today, however, the University closed due to inclement weather. So, I'll post a few updates on what I've been working on:
First, I just barely finished an entry in time for the ROS 3D contest. I had insisted on using the new OpenNI-based drivers, which had some issues under 32-bit systems until about a week before the contest deadline. This was also my first attempt using the ROS Point Cloud Library (PCL). My entry improves the ar_pose package to use depth data from the Kinect to improve the localization of the AR Markers. It also uses surface normal calculation from PCL to improve the orientation. While it's probably not the most technically interesting entry, I definitely did learn a bunch about PCL while working on it. You can read more about the entry here. Or watch the really horrible video (my normal camera stopped working about 12 hours before the deadline):
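The surface-normal idea can be sketched in just a few lines: take the 3D points in a small patch of the depth image around the marker, fit a plane to them, and use the plane's normal to correct the marker's orientation. Here's a minimal illustration of that plane fit using NumPy and SVD -- an assumption of how the math works out, not the actual PCL code from the entry:

```python
import numpy as np

def estimate_normal(points):
    """Estimate the surface normal of a patch of 3D points.

    Fits a plane by PCA: the normal is the singular vector associated
    with the smallest singular value of the mean-centered points.
    """
    centered = points - points.mean(axis=0)
    # Rows of vt are the principal directions; the last row (smallest
    # singular value) is perpendicular to the best-fit plane.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    # Orient the normal toward the camera (assumed at the origin, +z forward).
    if normal[2] > 0:
        normal = -normal
    return normal

# Points sampled from a plane about 1 m away, tilted slightly in y.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.1, 0.1, size=(200, 2))
z = 1.0 + 0.1 * xy[:, 1]
patch = np.column_stack([xy, z])
n = estimate_normal(patch)
```

In the real entry, PCL's normal estimation does the equivalent computation over a k-nearest-neighbor or radius neighborhood for every point of interest.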
In creating the entry, I also quickly constructed another robot. This guy used an iRobot Create, Kinect, and a tripod to get the Kinect up to the "Standard Social Robot Minimum Height" that we've been applying to all our robots at Albany:

This tripod worked surprisingly well -- far better than anything I've previously built out of conduit.
Finally, I'm working on putting together a new robot built around an EX-106/RX-64 arm, a Kinect, and a somewhat larger mobile base... pictures and video shortly. (His parts are stuck on UPS trucks somewhere in a snowstorm.)
-Fergs
Friday, December 31, 2010
Neato + SLAM
Here is yet another story for "the power of open source."
I've been spending quite a bit of time working on SLAM with the Neato XV-11, using both the built-in laser and the Hokuyo URG-04LX-UG01. I had pretty much given up on gmapping working with the Neato -- until earlier today, when we found that the scan angle increment computation in gmapping does not work with the Neato laser's specifications. I probably wouldn't have found this bug had it not been for a user of the Trossen Robotics Community pointing out some issues he was having with gmapping, as my own version still had some modifications from my PML work earlier this year.
Anyways, for anyone wanting to use gmapping with the Neato robot, you can apply the following patch:
339c339
- gsp_laser_angle_increment_ = (angle_max - angle_min)/scan.ranges.size();
+ gsp_laser_angle_increment_ = scan.angle_increment;
to slam_gmapping.cpp. This uses the angle_increment reported in the laser scan message, rather than the computed one, which is incorrect for full-rotation scans. This avoids issues with the scan being improperly inverted, as well as problems with scan matching.
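To see why the computed value goes wrong for a full-rotation laser like the Neato's, consider a 360-reading scan where the driver reports angle_max as the angle of the last beam (an assumption for illustration): the span from angle_min to angle_max then covers 359 increments, not 360, so dividing by the number of ranges understates the true increment. A quick arithmetic check:

```python
import math

n_ranges = 360
true_increment = 2 * math.pi / 360                       # 1 degree per reading
angle_min = 0.0
angle_max = angle_min + (n_ranges - 1) * true_increment  # last beam at 359 deg

# What the unpatched gmapping computed from the scan bounds:
computed = (angle_max - angle_min) / n_ranges

# 359 increments spread over 360 readings -> off by a factor of 359/360.
error_factor = computed / true_increment
```

A one-part-in-360 error per beam is small, but it accumulates across the full rotation, which is enough to confuse scan matching; using the scan's own angle_increment field sidesteps the problem entirely.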
-Fergs
Other SLAM Algorithms: CoreSLAM, Part 3
I've now got the build system working and have fixed a number of parameter issues to make the package more ROS-compliant. Documentation for the package has been uploaded to http://www.ros.org/wiki/coreslam
Here's another example map created (of the first floor of a home):

Next up on the docket of winter break projects: some updates to the ArbotiX ROS package, and a number of people perception algorithms.
-Fergs
Saturday, December 25, 2010
Other SLAM Algorithms: CoreSLAM, Part 2
I've now got Monte Carlo Localization turned on, map publishing working in ROS, and a number of parameters defined. This required some hacking of the CoreSLAM library; in particular, I removed all of the references to differential-drive odometry, instead loading odometry externally from TF.
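Loading odometry externally means the library no longer integrates wheel encoders itself; all it really needs is the relative pose change between consecutive scans, which can be derived from two poses looked up in TF at the scan timestamps. A rough 2D sketch of that delta computation (a hypothetical helper, not the modified CoreSLAM code):

```python
import math

def pose_delta(prev, curr):
    """Relative motion (dx, dy, dtheta) expressed in the previous robot frame.

    prev and curr are (x, y, theta) poses in the odom frame, e.g. as
    looked up from TF at two scan timestamps.
    """
    x0, y0, th0 = prev
    x1, y1, th1 = curr
    # Rotate the world-frame displacement into the previous robot frame.
    dx_w, dy_w = x1 - x0, y1 - y0
    c, s = math.cos(-th0), math.sin(-th0)
    dx = c * dx_w - s * dy_w
    dy = s * dx_w + c * dy_w
    # Normalize the heading change into [-pi, pi).
    dth = (th1 - th0 + math.pi) % (2 * math.pi) - math.pi
    return dx, dy, dth

# Robot at (1, 2) facing +y moves 0.5 m forward and turns 10 degrees left.
d = pose_delta((1.0, 2.0, math.pi / 2),
               (1.0, 2.5, math.pi / 2 + math.radians(10)))
```

The nice side effect of this design is that any odometry source that publishes to TF -- wheel encoders, visual odometry, whatever -- works without touching the SLAM code again.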
Here's an updated map using the 12-12-neato-ils bag file:

There's still some work to fix the way that the map->odom transform is handled, and allow a configurable map size and resolution (both of which will require reworking some more of the underlying library). I'm hoping to have the code released shortly.
-Fergs
Friday, December 24, 2010
SLAM Data Sets
Over the past semester I've been spending a lot of time working with low-cost SLAM. In doing so, I've collected a number of datasets around the Albany campus. I've uploaded several of them (as ROS bag files), along with sample maps, to my server: http://www.fergy.me/slam. Anyone is free to use these datasets however they please -- in return, please post back maps and the algorithms/parameters used to create them. Over time, I would like to develop a set of best-known algorithms/parameters for low-cost SLAM.
These datasets were collected on iRobot Creates, the Neato XV-11, and the Vanadium Armadillo, using the Neato laser or the Hokuyo URG-04LX-UG01. In particular, I've recently collected a dataset I'm really looking forward to working with: the 2010-12-23-double-laser.bag dataset, which consists of a long route around the ILS lab with a Neato XV-11. I mounted a second laser, a Hokuyo URG-04LX-UG01, on the Neato for this run, aligned with the Neato laser as seen from above:

And a picture of the Neato with second laser:

-Fergs