I'm missing something here: why can't CrossEntropyLoss take a one-dimensional tensor?
import torch
from torch import Tensor, nn

X = Tensor([1.0, 2.0, 3.0])   # 1D input
labs = Tensor([2, 2, 3])
loss = nn.CrossEntropyLoss().forward(X, labs)   # raises IndexError
   1315     dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
   1316     if dtype is None:
-> 1317         ret = input.log_softmax(dim)
   1318     else:
   1319         ret = input.log_softmax(dim, dtype=dtype)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Why does this fail, and what needs to change to get the desired result?

If you look at the documentation for CrossEntropyLoss:
Input: (N,C) where C = number of classes
Target: (N) where each value is 0 <= targets[i] <= C-1
Output: scalar. If reduce is False, then (N) instead.
So it expects the input to be a 2D tensor of shape (N, C) and the target to be a 1D tensor of length N:
import torch
from torch import Tensor, nn

X = Tensor([[1.0, 2.0, 3.0]])    # 2D input: (N=1, C=3)
labs = torch.LongTensor([2])     # 1D target: 0 <= targets[i] <= C-1
loss = nn.CrossEntropyLoss().forward(X, labs)
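To make the (N, C) / (N) shape convention concrete, here is a small sketch with a batch of N = 3 samples over C = 3 classes (the logit values are illustrative, chosen to match the question's example row):

```python
import torch
from torch import nn

# 2D input of logits: one row per sample, one column per class
X = torch.tensor([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0]])   # shape (N=3, C=3)

# 1D target of class indices, each in [0, C-1]
labs = torch.tensor([2, 2, 2])        # shape (N=3,)

criterion = nn.CrossEntropyLoss()
loss = criterion(X, labs)             # scalar: mean loss over the batch
print(loss)
```

Note that the module is normally called as `criterion(X, labs)` rather than via `.forward(...)`, and that a target value of 3 (as in the original `labs`) would be out of range for C = 3 classes.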