Using the code below, I am attempting to create 3D points from a depth image, but when the shader transforms each point by the projection and view matrices, the points all appear at what I believe is the origin (0, 0) in world coordinates.
When I have the shader use a model matrix together with the projection and view matrices, the points are not visible at all.
for (int i = 0; i < image_height * image_width; i++)
{
    int r_i = i / (int)image_width;   // row index
    int c_i = i % (int)image_width;   // column index
    const u_short* pixel = (const u_short*)&depth_data[i * image_pixel_stride];

    // Keep only a ~2-3 foot capture zone.
    if (*pixel < 1000 || *pixel > 1500)
        continue;

    // Normalize the raw depth against the sensor maximum (8191 = 2^13 - 1).
    float d_i = (float)*pixel / 8191.0f;

    // x-coordinate: horizontal ray angle for this column, swept across fovW.
    float alpha_h = (M_PI - fovW) / 2;
    float gamma_i_h = alpha_h + (float)c_i * (fovW / image_width);
    float x = d_i / tan(gamma_i_h);

    // y-coordinate: vertical ray angle for this row, swept across fovH.
    float alpha_v = 2 * M_PI - (fovH / 2);
    float gamma_i_v = alpha_v + (float)r_i * (fovH / image_height);
    float y = d_i * tan(gamma_i_v) * -1;

    // z-coordinate: the normalized depth itself.
    vertices[vertexIndex++] = x;
    vertices[vertexIndex++] = y;
    vertices[vertexIndex++] = d_i;
}
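As a debugging aid, it can help to push one of these generated points through the same matrix chain on the CPU and inspect the clip-space result, since a point is only visible when its x, y, and z all lie within [-w, w] after the multiply. A minimal sketch using glm (the matrix parameters are assumed to be the same ones handed to the shader):

#include <glm/glm.hpp>
#include <cstdio>

// Hypothetical helper: shows where one generated point lands in world and
// clip space for the given matrices.
void DebugProjectPoint(const glm::mat4& projection_mat,
                       const glm::mat4& view_mat,
                       const glm::mat4& model_mat,
                       float x, float y, float z)
{
    glm::vec4 world = model_mat * glm::vec4(x, y, z, 1.0f);
    glm::vec4 clip = projection_mat * view_mat * world;
    std::printf("world (%f, %f, %f) -> clip (%f, %f, %f, w=%f)\n",
                world.x, world.y, world.z,
                clip.x, clip.y, clip.z, clip.w);
}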
To get the model matrix, I first took the ArCamera pose's 4×4 matrix, transformed the xyz point above with it on the CPU, then sent the result to the shader for projection * view * 3dpoint.
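For reference, the matrices themselves come from the ARCore camera along these lines (a minimal sketch of the NDK calls; ar_frame and the 0.1f/100.0f near/far planes are placeholders for my actual values, and arCameraPose is assumed to have been allocated earlier with ArPose_create):

#include <glm/gtc/type_ptr.hpp>
#include "arcore_c_api.h"

// Inside the per-frame update, after ArSession_update() has run.
ArCamera* ar_camera = nullptr;
ArFrame_acquireCamera(ar_session, ar_frame, &ar_camera);

float raw[16];
ArCamera_getViewMatrix(ar_session, ar_camera, raw);
glm::mat4 view_mat = glm::make_mat4(raw);

ArCamera_getProjectionMatrix(ar_session, ar_camera,
                             /*near=*/0.1f, /*far=*/100.0f, raw);
glm::mat4 projection_mat = glm::make_mat4(raw);

// The camera pose's 4x4 matrix, which is what I used as the model matrix.
ArCamera_getPose(ar_session, ar_camera, arCameraPose);
ArPose_getMatrix(ar_session, arCameraPose, raw);
glm::mat4 camera_pose_mat = glm::make_mat4(raw);

ArCamera_release(ar_camera);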
The next thing I tried was using an anchor to get a model matrix:
ArAnchor* aa;
ArSession_acquireNewAnchor(ar_session, arCameraPose, &aa);
glm::mat4 model_mat(1.0f);
ArTrackingState tracking_state = AR_TRACKING_STATE_STOPPED;
ArAnchor_getTrackingState(ar_session, aa, &tracking_state);
if (tracking_state == AR_TRACKING_STATE_TRACKING) {
    // Render object only if the tracking state is AR_TRACKING_STATE_TRACKING.
    util::GetTransformMatrixFromAnchor(*aa, ar_session, &model_mat);
    ...
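Either way, the combined matrix reaches the shader along these lines (a sketch of my setup, not the exact code; the uniform name uMvp, the shader_program handle, and the GLSL source are illustrative assumptions):

#include <GLES2/gl2.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Illustrative vertex shader: a single premultiplied MVP applied per point.
const char* kPointVertexShader = R"(
    uniform mat4 uMvp;
    attribute vec4 aPosition;
    void main() {
        gl_Position = uMvp * aPosition;
        gl_PointSize = 5.0;
    }
)";

// Combine on the CPU once per frame and upload as one uniform.
void UploadMvp(GLuint shader_program,
               const glm::mat4& projection_mat,
               const glm::mat4& view_mat,
               const glm::mat4& model_mat)
{
    glm::mat4 mvp = projection_mat * view_mat * model_mat;
    glUseProgram(shader_program);
    glUniformMatrix4fv(glGetUniformLocation(shader_program, "uMvp"),
                       1, GL_FALSE, glm::value_ptr(mvp));
}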