I want to use my own binary_crossentropy instead of the one that ships with the Keras library. Here is my custom function:
```python
import theano
from keras import backend as K

def elementwise_multiply(a, b):  # a and b are tensors
    c = a * b
    return theano.function([a, b], c)

def custom_objective(y_true, y_pred):
    first_log = K.log(y_pred)
    first_log = elementwise_multiply(first_log, y_true)
    second_log = K.log(1 - y_pred)
    second_log = elementwise_multiply(second_log, (1 - y_true))
    result = second_log + first_log
    return K.mean(result, axis=-1)
```
Note: this is for practice. I am aware of T.nnet.binary_crossentropy(y_pred, y_true).
However, when I compile the model:

```python
sgd = SGD(lr=0.001)
model.compile(loss=custom_objective, optimizer=sgd)
```
I get this error:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     36
     37 sgd = SGD(lr=0.001)
---> 38 model.compile(loss=custom_objective, optimizer=sgd)
     39 # ===============================================

C:\Program Files (x86)\Anaconda3\lib\site-packages\keras\models.py in compile(self, optimizer, loss, class_mode)
    418         else:
    419             mask = None
--> 420         train_loss = weighted_loss(self.y, self.y_train, self.weights, mask)
    421         test_loss = weighted_loss(self.y, self.y_test, self.weights, mask)
    422

C:\Program Files (x86)\Anaconda3\lib\site-packages\keras\models.py in weighted(y_true, y_pred, weights, mask)
     80     '''
     81     # score_array has ndim >= 2
---> 82     score_array = fn(y_true, y_pred)
     83     if mask is not None:
     84         # mask should have the same shape as score_array

<ipython-input> in custom_objective(y_true, y_pred)
     11     second_log = K.log(1 - K.clip(y_true, K.epsilon(), np.inf))
     12     second_log = elementwise_multiply(second_log, (1 - y_true))
---> 13     result = second_log + first_log
     14     # result = np.multiply(result, y_pred)
     15     return K.mean(result, axis=-1)

TypeError: unsupported operand type(s) for +: 'function' and 'function'
```
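The error message itself points at the cause: `theano.function([a, b], c)` compiles the graph and returns a *callable*, not a tensor, so `second_log` and `first_log` end up as plain Python function objects. A minimal sketch (the two defs are stand-ins for the compiled functions, not the actual Theano objects) reproduces the same TypeError:

```python
# theano.function returns a callable; adding two callables with `+`
# raises the same TypeError seen in the traceback above.
def first_log(x):   # stand-in for the first compiled function
    return x

def second_log(x):  # stand-in for the second compiled function
    return x

try:
    result = second_log + first_log
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'function' and 'function'
```

The fix is to keep everything symbolic inside the loss function (i.e. return `a * b` directly) and let Keras compile the graph once, rather than calling `theano.function` per operation.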
When I replace elementwise_multiply with an inline expression:
```python
def custom_objective(y_true, y_pred):
    first_log = K.log(y_pred)
    first_log = first_log * y_true
    second_log = K.log(1 - y_pred)
    second_log = second_log * (1 - y_true)
    result = second_log + first_log
    return K.mean(result, axis=-1)
```
the model compiles, but the loss value is nan:

```
Epoch 1/1
945/945 [==============================] - 62s - loss: nan - acc: 0.0011 - val_loss: nan - val_acc: 0.0000e+00
```
Can someone help me with this?

Thanks
I found the problem. I have to multiply the return value by -1, because I am using stochastic gradient descent (sgd) as the optimizer, not stochastic gradient ascent!
Here is the code, and it works like a charm:
```python
import theano
from keras import backend as K

def custom_objective(y_true, y_pred):
    first_log = K.log(y_pred)
    first_log = first_log * y_true
    second_log = K.log(1 - y_pred)
    second_log = second_log * (1 - y_true)
    result = second_log + first_log
    return (-1 * K.mean(result))
```
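To sanity-check the corrected objective without a Keras session, the same arithmetic can be mirrored in plain NumPy on a small hand-made batch (`custom_objective_np` is a hypothetical helper for this check, not part of Keras):

```python
import numpy as np

def custom_objective_np(y_true, y_pred):
    # Plain-NumPy mirror of the corrected Keras loss above.
    first_log = np.log(y_pred) * y_true
    second_log = np.log(1 - y_pred) * (1 - y_true)
    return -1 * np.mean(second_log + first_log)

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.3])

loss = custom_objective_np(y_true, y_pred)
print(round(loss, 4))  # 0.1976 -- positive, so SGD minimizing it is correct
```

The sign flip is what makes the positive log-likelihood into a loss: without the `-1`, SGD would be minimizing the likelihood itself, i.e. making the model worse each step.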