
Kinect Point Cloud Color Misplacement
<p>I am working on a Kinect point cloud application with the Kinect SDK 1.5. So far I am able to map the Kinect depth data to real-world coordinates and obtain the point cloud. To achieve that, I calibrated the Kinect depth camera to the 2D RGB frame using Matthew Fisher's code, which is shared at the following link (http://graphics.stanford.edu/~mdfisher/Kinect.html). I have modified the code for the Kinect SDK in C# (depth is returned in mm):</p> <pre><code>Hashtable preparePointCloud(short[] rawDepthData)
{
    // Holds the 3D point of each 2D RGB pixel, keyed by Y * 640 + X.
    // For instance: the 3D point at x:100 y:50 (2D coordinates) is stored
    // under key 640 * 50 + 100 in the hashtable.
    Hashtable VectorList = new Hashtable();
    int x = 0;
    int y = 0;
    // rawDepthData holds the depth in mm at y * 640 + x, as it comes from
    // the depth frame, so the color of a 3D point can easily be found by
    // looking at the hashtable key.
    for (int i = 0; i &lt; rawDepthData.Length; i++)
    {
        int z = rawDepthData[i]; // int z = rawDepthData[y * 640 + x];
        if (z &gt; 1) // don't add the unknown values "-1"
        {
            Vector3 temp = DepthToWorld(x, y, z);
            Vector2 color = WorldToColor(ref temp);
            int colorindex = (int)((color.Y) * 640 + (color.X));
            if (VectorList[colorindex] == null)
            {
                VectorList.Add(colorindex, temp);
            }
        }
        x++;
        if (x == 640)
        {
            x = 0;
            y++;
            if (y == 480)
                break;
        }
    }
    return VectorList;
}

Vector3 DepthToWorld(int x, int y, int depthValue)
{
    Vector3 result;
    double depth = (double)(depthValue) / 1000; // mm to meters
    result.X = (float)((x - cx_d) * depth * fx_d);
    result.Y = (float)((y - cy_d) * depth * fy_d);
    result.Z = (float)depth;
    return result;
}

Vector2 WorldToColor(ref Vector3 pt)
{
    OpenTK.Vector4 transformedPos_ = OpenTK.Vector4.Transform(
        new OpenTK.Vector4(pt, 1.0f), depthCalibrationTransformationMatrix);
    Vector3 transformedPos = transformedPos_.Xyz;
    float invZ = 1.0f / transformedPos.Z;
    Vector2 result = new Vector2();
    double xValue = (transformedPos.X * fx_rgb * invZ) + cx_rgb;
    result.X = (float)Math.Round(xValue, 0, System.MidpointRounding.AwayFromZero);
    double yValue = (transformedPos.Y * fy_rgb * invZ) + cy_rgb;
    result.Y = (float)Math.Round(yValue, 0, System.MidpointRounding.AwayFromZero);
    colorrecord.Add(result);
    return result;
}
</code></pre> <p>The point cloud preparation step is below:</p> <pre><code>foreach (int rgbCoor in previousDepthListWithKeysTheseAreEqualTo_Y_Times640Plus_X.Keys)
{
    // Add the point coordinates to the point cloud.
    PointCloud.rigidPointCloudLocationList.Add(
        (Vector3)previousDepthListWithKeysTheseAreEqualTo_Y_Times640Plus_X[rgbCoor]);
    // Obtain the color.
    Vector3 point = (Vector3)previousDepthListWithKeysTheseAreEqualTo_Y_Times640Plus_X[rgbCoor];
    Vector2 coor = WorldToColor(ref point); // to check that I am looking at the correct coordinates
    int indexCoor = 640 * (int)coor.Y + (int)coor.X;
    Color color_ = Color.FromArgb(previousRawColorData[rgbCoor * 4],
                                  previousRawColorData[rgbCoor * 4 + 1],
                                  previousRawColorData[rgbCoor * 4 + 2]);
    // Now add the color; the color and 3D location info will be at the
    // same list index (double checked).
    PointCloud.rigidPointCloudColorList.Add(color_.ToArgb());
}
</code></pre> <p>To summarize: I have a problem with this calibration step. The colors and the point cloud do not match. There are developers who use Fisher's method for Kinect calibration without problems, so I must be missing something. Some screenshots from my experiments are listed below:</p> <p><a href="http://i49.tinypic.com/2nhghau.png" rel="nofollow">2D rgb picture</a></p> <p><a href="http://i45.tinypic.com/kb5b39.png" rel="nofollow">Point cloud view</a></p> <p>Does this problem look familiar to you? I have no lead so far.</p> <p>Lastly, I want to ask a small question about the point cloud view. The point cloud shows the points like a grid surface (see the last picture above). Is that normal?
</p> <p>Thanks in advance.</p> <p>Update: does anyone know what this code segment does?</p> <p>result.x = Utility::Bound(Math::Round((transformedPos.x * fx_rgb * invZ) + cx_rgb), 0, 639);</p> <p>result.y = Utility::Bound(Math::Round((transformedPos.y * fy_rgb * invZ) + cy_rgb), 0, 479);</p> <p>I could not find any explanation of what "Utility::Bound()" does.</p>
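<p>For reference, here is a minimal Python sketch of the pinhole math I believe Fisher's code implements. Note that the fx_d/fy_d constants on his page are <em>reciprocal</em> focal lengths, which is why DepthToWorld multiplies by them; the intrinsic values below are the ones I copied from his page, so treat them as illustrative, and the depth-to-RGB extrinsic transform is deliberately left out (identity) to keep the sketch short:</p>

```python
# Pinhole sketch of DepthToWorld / WorldToColor (extrinsics omitted).
# fx_d / fy_d are reciprocal depth-camera focal lengths, as published
# on Fisher's page; all constants are illustrative.

fx_d, fy_d = 1.0 / 594.21434, 1.0 / 591.04054   # depth cam (reciprocal f)
cx_d, cy_d = 339.30781, 242.73914               # depth cam principal point
fx_rgb, fy_rgb = 529.21508, 525.56394           # RGB cam focal lengths
cx_rgb, cy_rgb = 328.94272, 267.48068           # RGB cam principal point

def depth_to_world(x, y, depth_mm):
    """Back-project depth pixel (x, y) to camera-space meters."""
    z = depth_mm / 1000.0                        # mm -> m, as in my C# code
    return ((x - cx_d) * z * fx_d,
            (y - cy_d) * z * fy_d,
            z)

def world_to_color(pt):
    """Project a 3D point into the RGB image (no extrinsic transform here;
    the real code applies depthCalibrationTransformationMatrix first)."""
    x, y, z = pt
    inv_z = 1.0 / z
    u = round(x * fx_rgb * inv_z + cx_rgb)
    v = round(y * fy_rgb * inv_z + cy_rgb)
    return u, v
```

<p>A quick sanity check: the pixel at the depth principal point with 1000 mm depth should back-project to (0, 0, 1) in meters, and that point should project to the RGB principal point.</p>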
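<p>And this is the indexing arithmetic I use to pick the color bytes, written out in Python so the offsets are easy to check. color_at is just a hypothetical helper, and I am assuming the SDK's Bgra32 frame layout (bytes per pixel: B, G, R, A), so perhaps my C# code above even has the channels swapped:</p>

```python
# Index arithmetic for a 32-bit BGRA color frame (assumed Bgra32 layout).
def color_at(raw, u, v, width=640):
    """Return (r, g, b) for pixel (u, v); i mirrors colorindex * 4."""
    i = (v * width + u) * 4
    b, g, r = raw[i], raw[i + 1], raw[i + 2]
    return (r, g, b)
```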
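<p>From its name and the 0..639 / 0..479 arguments, my guess is that Utility::Bound simply clamps the projected coordinate into the valid pixel range, i.e. something like:</p>

```python
# My guess at what Utility::Bound does: clamp value into [lo, hi].
def bound(value, lo, hi):
    return max(lo, min(hi, value))
```

<p>If that guess is right, my C# port is missing the clamp, so a point that projects slightly outside the 640x480 RGB frame would produce an out-of-range colorindex.</p>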