I am currently working on my graduation project, 3D reconstruction from multiple images. I have divided the project into two parts: one that recovers the point cloud using structure-from-motion (I am learning from the code here), and one that generates a surface from the point cloud (which I am learning from this paper).

**My implementation of the first part can be seen here, and the second one here.**

Both of them are… acceptable, I guess? Here’s the working result from the point-cloud reconstruction:

And here’s the input image:

And I guess that’s not bad. The surface reconstruction part also works reasonably well, as long as the input dataset doesn’t come from the first part, such as this bunny:

However, when I plug the first input into the second one, here’s what I get:

And I have absolutely no idea why. My current guess is a lack of features, but I don’t know the real cause, nor how to increase the number of usable features. I am using ORB in OpenCV, and raising the feature count just makes the matching messier, which in turn makes the resulting point cloud messier.

I have been stuck on this for quite a few days and will be eternally grateful to anyone who helps. Thanks a bunch!

## Code

The code is a little lengthy, so I am going to post excerpts here:

### SfM part

```
Result SfM::run_sfm() {
    if (images.empty()) {
        LOG("There is no image to work on.");
        return ERR;
    }
    // Initialize feature matcher
    feature_util.init();
    stereo_util.init();
    // Initialize intrinsics
    intrinsics.k = (cv::Mat_<float>(3, 3) << 2500, 0, images[0].cols / 2,
                                             0, 2500, images[0].rows / 2,
                                             0, 0, 1);
    intrinsics.k_inv = intrinsics.k.inv();
    intrinsics.distortion = cv::Mat_<float>::zeros(1, 4);
    camera_poses.resize(images.size());
    extract_features();
    create_feature_match_matrix();
    find_baseline_triangulation();
    add_more_views_to_reconstruction();
    return OK;
}
```

### The Surface Reconstruction part

```
auto Hoppe::run() -> bool {
    if (pointcloud.points.empty()) {
        HOPPE_LOG("ERR! Can't run without point cloud");
        return false;
    }
    estimate_planes();
    fix_orientations();
    cube_march();
    return true;
}
```

And here’s `cube_march()`:

```
auto Hoppe::cube_march() -> void {
    cv::Vec3f bounding_box_min, bounding_box_max;
    calculate_bounds(bounding_box_min, bounding_box_max);
    auto size = bounding_box_max - bounding_box_min;
    HOPPE_LOG("Bounding box size: %f %f %f", size(0), size(1), size(2));
    parameters.density = density_estimation(size);
    cv::Vec3i marching_size(ceilf(size(0) / parameters.density),
                            ceilf(size(1) / parameters.density),
                            ceilf(size(2) / parameters.density));
    auto volume = 0;
    do {
        volume = marching_size(0) * marching_size(1) * marching_size(2);
        if (volume > parameters.max_volume) {
            parameters.density *= 2.0f;
            marching_size = cv::Vec3i(ceilf(size(0) / parameters.density),
                                      ceilf(size(1) / parameters.density),
                                      ceilf(size(2) / parameters.density));
        }
    } while (volume > parameters.max_volume);
    HOPPE_LOG("Marching cube size: %d %d %d", marching_size(0),
              marching_size(1), marching_size(2));
    marcher.init(marching_size, parameters.density);
    HOPPE_LOG("Estimated: from %f %f %f to %f %f %f",
              bounding_box_min(0), bounding_box_min(1), bounding_box_min(2),
              bounding_box_max(0), bounding_box_max(1), bounding_box_max(2));
    marcher.march([&] (cv::Point3f p) {
        return sdf(p);
    }, VEC2POINT(bounding_box_min));
}
```

I will gladly provide more info if you ask. Thanks!

Source: Windows Questions C++