Just this past week I saw a post from the folks over at CCNY showing off their new visual odometry and keyframe (offline) mapping system. I decided to give it a try... and it worked out of the box, no problem. Here's a video CCNY posted:
A few days later I came across a video (and code with ROS wrappers) from IntRoLab at Sherbrooke:
I also have noticed that RGBDSLAM is still getting updates and documentation revisions, so it is still getting active improvements.
I'm thinking it would be really cool to put together an application that turns these 3D maps into 2D maps usable by the ROS navigation stack. Robots like the TurtleBot can typically localize against a map with just their Kinect-like sensor, but building maps has traditionally been tough due to the lack of features visible in a 57-degree-wide "fake laserscan".
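As a rough illustration of the idea (this is a hypothetical sketch, not part of any of the packages mentioned here), one simple way to turn a 3D map into a 2D map is to take the point cloud, keep only points in a height band the robot could actually collide with, and mark the corresponding cells of a 2D grid as occupied, following the ROS OccupancyGrid convention of 100 for occupied cells:

```python
import numpy as np

def cloud_to_occupancy_grid(points, resolution=0.05, z_min=0.1, z_max=1.5):
    """Project a 3D point cloud (N x 3 array, meters) into a 2D grid.

    Only points whose z falls in [z_min, z_max] count as obstacles,
    so floors and high ceilings are ignored. Returns (grid, origin):
    grid is a 2D uint8 array with 100 for occupied cells and 0
    otherwise; origin is the (x, y) position of grid cell (0, 0).
    """
    # Keep only points in the obstacle height band.
    band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    if band.size == 0:
        return np.zeros((1, 1), dtype=np.uint8), (0.0, 0.0)

    # Shift so the minimum x/y lands in cell (0, 0), then bin by cell size.
    origin = band[:, :2].min(axis=0)
    cells = np.floor((band[:, :2] - origin) / resolution).astype(int)
    width, height = cells.max(axis=0) + 1

    grid = np.zeros((height, width), dtype=np.uint8)
    grid[cells[:, 1], cells[:, 0]] = 100  # occupied
    return grid, tuple(origin)

# Toy example: a 1 m wall segment along x = 1.0 m at 0.5 m height.
wall = np.array([[1.0, y, 0.5] for y in np.arange(0.0, 1.0, 0.01)])
grid, origin = cloud_to_occupancy_grid(wall)
```

A real implementation would also need to handle marking free space (e.g. by ray tracing from the sensor poses) and filling in the map metadata before publishing, but the projection step itself is this simple.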
This is really awesome stuff for the robotics community -- we have at least 3 different 3D mapping applications released into the ROS ecosphere. Are there more out there that anyone knows of? If you've used these apps, feel free to comment!
-Fergs
Sir, I asked a question at "http://answers.ros.org/question/59500/slam-implementation-with-kinect-and-the-current-shortcomings/" and you answered it well, but I need to bother you a bit more. Could you tell me about some small areas within these algorithms that I could look into and try out? I need this badly and am quite confused right now. Could you mention some current improvements that would be suitable for my undergrad thesis work?
The best thing to do would be to look at the respective paper for each of these implementations and check out the "future work" section.