When I create a tensor from float using PyTorch, then cast it back to a float, it produces a different result. Why is this, and how can I fix it to return the same value?
import torch

num = 0.9
float(torch.tensor(num))
Output:
0.8999999761581421
This is a floating-point precision "issue"; you can read more about how Python handles floating-point numbers in the official Python tutorial. Essentially, not even num is actually storing exactly 0.9. The discrepancy in your case comes from the fact that num is double-precision (Python floats are 64-bit) while torch.tensor uses single-precision (32-bit, torch.float32) by default, so the value gets rounded to the nearest representable 32-bit float. If you try:
num = 0.9
float(torch.tensor(num, dtype=torch.float64))

you'll get 0.9.
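If you want to see the same rounding without PyTorch, the standard library's struct module can round-trip a Python float through a 32-bit representation, which is a minimal sketch of what the default torch.float32 dtype does to the value:

```python
import struct

num = 0.9  # Python floats are 64-bit (double precision)

# Round-trip through a 32-bit float, mimicking torch's default float32
as_float32 = struct.unpack('f', struct.pack('f', num))[0]
print(as_float32)  # 0.8999999761581421 -- same value as the question's output

# Round-tripping through a 64-bit double preserves the value exactly
as_float64 = struct.unpack('d', struct.pack('d', num))[0]
print(as_float64)  # 0.9
```

So the tensor isn't changing the number arbitrarily; it is simply storing the closest 32-bit float to 0.9, and casting back to Python's 64-bit float exposes that rounding.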