Making Things See by Greg Borenstein



Figure 4-6. The red dot shows up immediately after the completion of calibration, and our sketch begins receiving joint data. Having been converted from real-world to projective coordinates, the joint position vector matches up with the position of the user’s hand in the depth image.

This completes our walkthrough of this sketch. I’ve explained every line of it and every detail of the skeleton-tracking process. You’ve learned how to calibrate users so that their skeletons can be tracked and how to access the positions of their joints after they become available. You’ve seen how to convert that data so you can work with it in 2D coordinates that match the depth image with which you’re already thoroughly familiar. Some of these tasks, especially the calibration process, were somewhat complex and fiddly. But thankfully, they’ll remain the same for all of our sketches that access the skeleton data from here on out. You’ll never have to rewrite these calibration callbacks from scratch again; you can just copy and paste them from this sketch, though, as discussed, you might want to augment them so that they give the user additional feedback about the calibration process.
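For reference, a condensed version of that flow looks roughly like the sketch below. It uses the SimpleOpenNI calls discussed in this chapter (startPoseDetection, requestCalibrationSkeleton, getJointPositionSkeleton, convertRealWorldToProjective); treat it as a summary of the structure rather than a verbatim copy of the chapter's listing, since callback signatures can differ between library versions.

    import SimpleOpenNI.*;

    SimpleOpenNI kinect;

    void setup() {
      size(640, 480);
      kinect = new SimpleOpenNI(this);
      kinect.enableDepth();
      // Turn on user tracking so skeleton data becomes available.
      kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
      fill(255, 0, 0);
    }

    void draw() {
      kinect.update();
      image(kinect.depthImage(), 0, 0);

      IntVector userList = new IntVector();
      kinect.getUsers(userList);
      if (userList.size() > 0) {
        int userId = userList.get(0);
        if (kinect.isTrackingSkeleton(userId)) {
          // Joint positions arrive in real-world (millimeter) coordinates...
          PVector rightHand = new PVector();
          kinect.getJointPositionSkeleton(userId,
            SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
          // ...so we project them into 2D to line up with the depth image.
          PVector convertedRightHand = new PVector();
          kinect.convertRealWorldToProjective(rightHand, convertedRightHand);
          ellipse(convertedRightHand.x, convertedRightHand.y, 10, 10);
        }
      }
    }

    // The calibration callbacks: copy and paste these into any sketch
    // that needs skeleton data.
    void onNewUser(int userId) {
      kinect.startPoseDetection("Psi", userId);
    }

    void onStartPose(String pose, int userId) {
      kinect.stopPoseDetection(userId);
      kinect.requestCalibrationSkeleton(userId, true);
    }

    void onEndCalibration(int userId, boolean successful) {
      if (successful) {
        kinect.startTrackingSkeleton(userId);
      } else {
        kinect.startPoseDetection("Psi", userId);
      }
    }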

Shortly, we’ll move on to doing some more advanced things with the joint data: displaying the full skeleton and learning how to make measurements between various joints. But first we have a few loose ends to tie up here. There are a couple of lingering questions from this first skeleton sketch, just beneath the surface of our current code. We’ll explore these mysteries by putting together two variations of the current sketch.

The first question that arises when we look at our sketch in its current form is: what happened to the information about the distance of the user’s right hand? As it currently stands, our red dot tracks the user’s hand as it moves left and right and up and down, but if the user moves his hand closer to the Kinect or farther away, the dot doesn’t respond at all. We certainly have information about the distance of the user’s hand. The PVector returned by getJointPositionSkeleton has x-, y-, and z-components. But right now, we’re just ignoring the z-component of this vector.

How can we use the z-coordinate of the joint position when we’re already projecting it onto a two-dimensional plane so that it corresponds with the depth image? We don’t want to literally move the red dot forward or backward in space. That would cause it to lose its registration with the depth image. Instead, we could represent the recession of the user’s hand in space using an old trick from traditional perspective drawing: we can scale the size of the ellipse. The most basic principle of perspective is that objects that are closer appear larger than those that are farther away. We can re-create this effect without moving our ellipse out of the projective plane by simply making it larger as the user’s hand gets closer and smaller as it gets farther away.

You may already be thinking to yourself: this sounds like a job for map.
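Here’s a sketch of how that variation might look inside draw(), using the same rightHand and convertedRightHand vectors as before; the 700-to-2500 millimeter input range and the 50-to-5 pixel output range are just starting assumptions that you’ll want to tune to how far from the Kinect your user actually stands.

    if (kinect.isTrackingSkeleton(userId)) {
      PVector rightHand = new PVector();
      kinect.getJointPositionSkeleton(userId,
        SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
      PVector convertedRightHand = new PVector();
      kinect.convertRealWorldToProjective(rightHand, convertedRightHand);
      // Use the hand's distance from the Kinect (in millimeters) to set
      // the dot's size: larger when the hand is close, smaller when far.
      float ellipseSize = map(convertedRightHand.z, 700, 2500, 50, 5);
      fill(255, 0, 0);
      ellipse(convertedRightHand.x, convertedRightHand.y,
        ellipseSize, ellipseSize);
    }

Note that map doesn’t clamp its output, so a hand closer than 700 millimeters will produce a dot larger than 50 pixels; wrapping the result in constrain() will cap it if that matters for your sketch.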


