(Libtorch) FP16 tensor causes c10::Error in torch::max() and torch::sqrt()

I get a c10::Error when I define an FP16 tensor and call torch::max() or torch::sqrt() on it. The code is shown below:

float test_float[3][3] = { {1.0f, 2.0f, 3.0f}, {4.0f, 5.0f, 6.0f}, {7.0f, 8.0f, 9.0f} };
torch::Tensor test_float_tensor =
    torch::from_blob(test_float, {3, 3}).to(at::kCPU).to(torch::kFloat16);
torch::sqrt(test_float_tensor);          // throws c10::Error
torch::max(test_float_tensor, 1, true);  // throws c10::Error

[screenshot of the c10::Error message]
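
The error can be caught and inspected instead of terminating the program. A minimal sketch of that (the exact message depends on the libtorch version; presumably it says the kernel is not implemented for the Half type):

#include <torch/torch.h>
#include <iostream>

int main() {
    float test_float[3][3] = { {1.0f, 2.0f, 3.0f},
                               {4.0f, 5.0f, 6.0f},
                               {7.0f, 8.0f, 9.0f} };
    torch::Tensor t =
        torch::from_blob(test_float, {3, 3}).to(at::kCPU).to(torch::kFloat16);
    try {
        torch::sqrt(t);           // already throws, so max() is never reached
        torch::max(t, 1, true);
    } catch (const c10::Error& e) {
        std::cerr << e.what() << '\n';  // print the full c10 error message
    }
    return 0;
}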
If I change the FP16 tensor to an FP32 tensor, the error goes away.

float test_float[3][3] = { {1.0f, 2.0f, 3.0f}, {4.0f, 5.0f, 6.0f}, {7.0f, 8.0f, 9.0f} };
torch::Tensor test_float_tensor =
    torch::from_blob(test_float, {3, 3}).to(at::kCPU).to(torch::kFloat32);
torch::sqrt(test_float_tensor);          // no error
torch::max(test_float_tensor, 1, true);  // no error

Why does this happen? I have seen it said that the CPU does not support FP16 tensors, yet some functions, such as torch::sum(), do work on FP16 tensors on the CPU.
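
In the meantime I work around it by upcasting to FP32 for the unsupported ops and casting the result back. A sketch of that (this assumes FP16 is only needed for storage, not for the computation itself):

float test_float[3][3] = { {1.0f, 2.0f, 3.0f},
                           {4.0f, 5.0f, 6.0f},
                           {7.0f, 8.0f, 9.0f} };
torch::Tensor fp16 =
    torch::from_blob(test_float, {3, 3}).to(at::kCPU).to(torch::kFloat16);

// Run the unsupported ops in FP32, then cast the results back to FP16.
torch::Tensor sqrt_fp16 =
    torch::sqrt(fp16.to(torch::kFloat32)).to(torch::kFloat16);
auto [max_vals_f32, max_idx] = torch::max(fp16.to(torch::kFloat32), 1, true);
torch::Tensor max_vals = max_vals_f32.to(torch::kFloat16);

// torch::sum() runs on the FP16 CPU tensor directly, as noted above.
torch::Tensor total = torch::sum(fp16);

This keeps the stored tensors at half precision while the intermediate math runs at full precision, but I would still like to understand why only some ops have FP16 CPU kernels.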
