TensorFlow converges but the predictions are wrong

Kendall Weihe

I posted a similar question here a few days ago, and since then I have edited in fixes for the bugs I found, but the problem of wrong predictions remains.

I have two networks: one with 3 conv layers, and another with 3 conv layers followed by 3 deconv layers. Both take a 200x200 input image. The output is the same 200x200 resolution, but with two classifications (a zero or a one, since this is a segmentation network), so the network prediction has dimensions 200x200x2 (plus batch_size). Let's talk about the network with the deconv layers.
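For reference, here is a rough shape trace of the deconv variant, a sketch I derived from the weight shapes and strides in the code below (batch dimension omitted):

# input          : 200 x 200 x 1
# conv1 + pool   : 100 x 100 x 32
# conv2 + pool   :  50 x  50 x 64
# conv3 + pool   :  25 x  25 x 128
# deconv1 (wdc1) :  50 x  50 x 64
# deconv2 (wdc2) : 100 x 100 x 32
# deconv3 (wdc3) : 200 x 200 x 2   (one channel per class)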

Here is the weird part... out of 10 training runs, maybe 3 of them will converge. The other 7 diverge to an accuracy of 0.0.

The conv and deconv layers are activated with ReLUs. The optimizer seems to be doing something strange. When I print the predictions after every training iteration, the values start out large in magnitude (which makes sense, given they have all been passed through ReLUs), but after each iteration the values get smaller, until they sit roughly between 0 and 2. I then pass them through a sigmoid (sigmoid_cross_entropy_with_logits), which squashes large negative values to 0 and large positive values to 1. When I make predictions, I re-activate the output by passing it through the sigmoid again.
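To make that concrete, here is a quick standalone check (not part of the training script) of what the sigmoid does to logits of different magnitudes:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# large-magnitude logits saturate towards 0 or 1
print sigmoid(np.array([-10., 10.]))    # ~[0., 1.]
# logits squeezed into the 0-to-2 range land in the middle of the sigmoid
print sigmoid(np.array([0., 1., 2.]))   # ~[0.5, 0.73, 0.88]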

So after the first iteration, the prediction values are reasonable...

Accuracy = 0.508033
[[[[ 1.  0.]
   [ 0.  1.]
   [ 0.  0.]
   ..., 
   [ 1.  0.]
   [ 1.  1.]
   [ 1.  0.]]

  [[ 0.  1.]
   [ 1.  1.]
   [ 0.  0.]
   ..., 
   [ 1.  1.]
   [ 1.  1.]
   [ 0.  1.]]

But then after some iterations, and let's say this is a run that actually converges, the prediction values look like this (because the optimizer makes the outputs smaller, they all sit in the weird middle range of the sigmoid):

  [[ 0.51028508  0.63202268]
   [ 0.24386917  0.52015287]
   [ 0.62086064  0.6953823 ]
   ..., 
   [ 0.2593964   0.13163178]
   [ 0.24617286  0.5210492 ]
   [ 0.24692698  0.5876413 ]]]]
Accuracy = 0.999913

Is something wrong with my optimizer function?

Here is the full code... jump to def conv_net to see how the network is created... after that function come the definitions of the cost function, the optimizer, and the accuracy. You will notice that when I measure the accuracy and make predictions, I re-activate the output with tf.nn.sigmoid(pred); this is because the cost function sigmoid_cross_entropy_with_logits combines the activation and the loss in the same function. In other words, pred (the network) outputs linear values (logits).

import tensorflow as tf
import pdb
import numpy as np
from numpy import genfromtxt
from PIL import Image

# Parameters
learning_rate = 0.001
training_iters = 10000
batch_size = 10
display_step = 1

# Network Parameters
n_input = 200 # MNIST data input (img shape: 28*28)
n_output = 40000
n_classes = 2 # MNIST total classes (0-9 digits)
#n_input = 200

dropout = 0.75 # Dropout, probability to keep units

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input, n_input])
y = tf.placeholder(tf.float32, [None, n_input, n_input, n_classes])
keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)


def convert_to_2_channel(x, batch_size):
    #assume input has dimension (batch_size,x,y)
    #output will have dimension (batch_size,x,y,2)
    output = np.empty((batch_size, 200, 200, 2))

    temp_arr1 = np.empty((batch_size, 200, 200))
    temp_arr2 = np.empty((batch_size, 200, 200))

    for i in xrange(batch_size):
        for j in xrange(3):
            for k in xrange(3):
                if x[i][j][k] == 1:
                    temp_arr1[i][j][k] = 1
                    temp_arr2[i][j][k] = 0
                else:
                    temp_arr1[i][j][k] = 0
                    temp_arr2[i][j][k] = 1

    for i in xrange(batch_size):
        for j in xrange(200):
            for k in xrange(200):
                for l in xrange(2):
                    if l == 0:
                        output[i][j][k][l] = temp_arr1[i][j][k]
                    else:
                        output[i][j][k][l] = temp_arr2[i][j][k]

    return output


# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')


# Create model
def conv_net(x, weights, biases, dropout):
    # Reshape input picture
    x = tf.reshape(x, shape=[-1, 200, 200, 1])

    # Convolution Layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    # Max Pooling (down-sampling)
    #conv1 = tf.nn.local_response_normalization(conv1)
    conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    # Max Pooling (down-sampling)
    #conv2 = tf.nn.local_response_normalization(conv2)
    conv2 = maxpool2d(conv2, k=2)

    # Convolution Layer
    conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
    # # Max Pooling (down-sampling)
    #conv3 = tf.nn.local_response_normalization(conv3)
    conv3 = maxpool2d(conv3, k=2)

    temp_batch_size = tf.shape(x)[0]
    output_shape = [temp_batch_size, 50, 50, 64]
    conv4 = tf.nn.conv2d_transpose(conv3, weights['wdc1'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
    conv4 = tf.nn.bias_add(conv4, biases['bdc1'])
    conv4 = tf.nn.relu(conv4)
    # conv4 = tf.nn.local_response_normalization(conv4)

    # output_shape = tf.pack([temp_batch_size, 100, 100, 32])
    output_shape = [temp_batch_size, 100, 100, 32]
    conv5 = tf.nn.conv2d_transpose(conv4, weights['wdc2'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
    conv5 = tf.nn.bias_add(conv5, biases['bdc2'])
    conv5 = tf.nn.relu(conv5)
    # conv5 = tf.nn.local_response_normalization(conv5)

    # output_shape = tf.pack([temp_batch_size, 200, 200, 1])
    output_shape = [temp_batch_size, 200, 200, 2]
    conv6 = tf.nn.conv2d_transpose(conv5, weights['wdc3'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
    conv6 = tf.nn.bias_add(conv6, biases['bdc3'])
    conv6 = tf.nn.relu(conv6)
    # pdb.set_trace()

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    fc1 = tf.reshape(conv6, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    # Apply Dropout
    fc1 = tf.nn.dropout(fc1, dropout)

    return (tf.add(tf.matmul(fc1, weights['out']), biases['out']))

# Store layers weight & bias

weights = {
    # 5x5 conv, 1 input, 32 outputs
    'wc1' : tf.Variable(tf.random_normal([5, 5, 1, 32])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc2' : tf.Variable(tf.random_normal([5, 5, 32, 64])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc3' : tf.Variable(tf.random_normal([5, 5, 64, 128])),

    'wdc1' : tf.Variable(tf.random_normal([2, 2, 64, 128])),

    'wdc2' : tf.Variable(tf.random_normal([2, 2, 32, 64])),

    'wdc3' : tf.Variable(tf.random_normal([2, 2, 2, 32])),

    # fully connected, 7*7*64 inputs, 1024 outputs
    'wd1': tf.Variable(tf.random_normal([80000, 1024])),
    # 1024 inputs, 10 outputs (class prediction)
    'out': tf.Variable(tf.random_normal([1024, 80000]))
}

biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bc3': tf.Variable(tf.random_normal([128])),
    'bdc1': tf.Variable(tf.random_normal([64])),
    'bdc2': tf.Variable(tf.random_normal([32])),
    'bdc3': tf.Variable(tf.random_normal([2])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([80000]))
}

# Construct model
pred = conv_net(x, weights, biases, keep_prob)
pred = tf.reshape(pred, [-1,n_input,n_input,n_classes])
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
# cost = (tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(0,tf.cast(tf.sub(tf.nn.sigmoid(pred),y), tf.int32))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.initialize_all_variables()
saver = tf.train.Saver()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    summary = tf.train.SummaryWriter('/tmp/logdir/', sess.graph)
    step = 1
    from tensorflow.contrib.learn.python.learn.datasets.scroll import scroll_data
    data = scroll_data.read_data('/home/kendall/Desktop/')
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = data.train.next_batch(batch_size)
        # Run optimization op (backprop)
        batch_x = batch_x.reshape((batch_size, n_input, n_input))
        batch_y = batch_y.reshape((batch_size, n_input, n_input))
        batch_y = convert_to_2_channel(batch_y, batch_size) #converts the 200x200 ground truth to a 200x200x2 classification
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                       keep_prob: dropout})
        #measure prediction
        prediction = sess.run(tf.nn.sigmoid(pred), feed_dict={x: batch_x, keep_prob: 1.})
        print prediction
        if step % display_step == 0:
            # Calculate batch loss and accuracy
            save_path = "model.ckpt"
            saver.save(sess, save_path)
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                              y: batch_y,
                                                              keep_prob: dropout})
            print "Accuracy = " + str(acc)
            if acc > 0.73:
                break
        step += 1
    print "Optimization Finished!"

    #make prediction
    im = Image.open('/home/kendall/Desktop/HA900_frames/frame0035.tif')
    batch_x = np.array(im)
    # pdb.set_trace()
    batch_x = batch_x.reshape((1, n_input, n_input))
    batch_x = batch_x.astype(float)
    pdb.set_trace()
    prediction = sess.run(tf.nn.sigmoid(pred), feed_dict={x: batch_x, keep_prob: dropout})
    print prediction
    arr1 = np.empty((n_input,n_input))
    arr2 = np.empty((n_input,n_input))
    for i in xrange(n_input):
        for j in xrange(n_input):
            for k in xrange(2):
                if k == 0:
                    arr1[i][j] = (prediction[0][i][j][k])
                else:
                    arr2[i][j] = (prediction[0][i][j][k])
    # prediction = np.asarray(prediction)
    # prediction = np.reshape(prediction, (200,200))
    # np.savetxt("prediction.csv", prediction, delimiter=",")
    np.savetxt("prediction1.csv", arr1, delimiter=",")
    np.savetxt("prediction2.csv", arr2, delimiter=",")
    # np.savetxt("prediction2.csv", arr2, delimiter=",")

    # Calculate accuracy for 256 mnist test images
    print "Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: data.test.images[:256],
                                      y: data.test.labels[:256],
                                      keep_prob: 1.})

The correct_pred variable (the variable that measures accuracy) uses a simple subtraction between the prediction and the ground truth, and then a comparison with zero (if the two are equal, the difference should be zero).
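As a side note (this turns out to matter, see the answer below): because that difference is cast to an integer, any pixel where the difference between sigmoid(pred) and y is smaller than 1 in magnitude casts to 0 and is counted as correct. A tiny standalone sketch of the effect, with made-up values:

import numpy as np

sig_pred = np.array([0.51, 0.63, 0.24])       # mid-range sigmoid outputs
labels   = np.array([1.,   0.,   1.  ])
diff = (sig_pred - labels).astype(np.int32)   # [-0.49, 0.63, -0.76] -> [0, 0, 0]
print np.mean(diff == 0)                      # 1.0, i.e. reported as ~100% accurate

This is presumably why an accuracy reading like 0.999913 can coexist with visibly wrong predictions.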

Also, I have plotted the graph of the network, and it looks very off to me. Here is a picture of it; I had to crop it to be able to view it.

[image 1]

[image 2]

EDIT: I found out why my graph looks terrible (thanks Olivier), and I also tried changing my loss function, but to no avail: it still diverges in the same manner.

with tf.name_scope("loss") as scope:
    # cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
    temp_pred = tf.reshape(pred, [-1, 2])
    temp_y = tf.reshape(y, [-1, 2])
    cost = (tf.nn.softmax_cross_entropy_with_logits(temp_pred, temp_y))
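One detail worth flagging here, as an aside on my part but consistent with the answer's snippet further down: without tf.reduce_mean, cost is a vector with one loss value per pixel rather than a scalar. For example,

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(temp_pred, temp_y))

would reduce it to a single number.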

Now, after the edits, the full code looks like this (and it still diverges):

import tensorflow as tf
import pdb
import numpy as np
from numpy import genfromtxt
from PIL import Image

# Parameters
learning_rate = 0.001
training_iters = 10000
batch_size = 10
display_step = 1

# Network Parameters
n_input = 200 # MNIST data input (img shape: 28*28)
n_output = 40000
n_classes = 2 # MNIST total classes (0-9 digits)
#n_input = 200

dropout = 0.75 # Dropout, probability to keep units

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input, n_input])
y = tf.placeholder(tf.float32, [None, n_input, n_input, n_classes])
keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)


def convert_to_2_channel(x, batch_size):
    #assume input has dimension (batch_size,x,y)
    #output will have dimension (batch_size,x,y,2)
    output = np.empty((batch_size, 200, 200, 2))

    temp_arr1 = np.empty((batch_size, 200, 200))
    temp_arr2 = np.empty((batch_size, 200, 200))

    for i in xrange(batch_size):
        for j in xrange(3):
            for k in xrange(3):
                if x[i][j][k] == 1:
                    temp_arr1[i][j][k] = 1
                    temp_arr2[i][j][k] = 0
                else:
                    temp_arr1[i][j][k] = 0
                    temp_arr2[i][j][k] = 1

    for i in xrange(batch_size):
        for j in xrange(200):
            for k in xrange(200):
                for l in xrange(2):
                    if l == 0:
                        output[i][j][k][l] = temp_arr1[i][j][k]
                    else:
                        output[i][j][k][l] = temp_arr2[i][j][k]

    return output


# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')


# Create model
def conv_net(x, weights, biases, dropout):
    # Reshape input picture
    x = tf.reshape(x, shape=[-1, 200, 200, 1])

    with tf.name_scope("conv1") as scope:
    # Convolution Layer
        conv1 = conv2d(x, weights['wc1'], biases['bc1'])
        # Max Pooling (down-sampling)
        #conv1 = tf.nn.local_response_normalization(conv1)
        conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    with tf.name_scope("conv2") as scope:
        conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
        # Max Pooling (down-sampling)
        #conv2 = tf.nn.local_response_normalization(conv2)
        conv2 = maxpool2d(conv2, k=2)

    # Convolution Layer
    with tf.name_scope("conv3") as scope:
        conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
        # # Max Pooling (down-sampling)
        #conv3 = tf.nn.local_response_normalization(conv3)
        conv3 = maxpool2d(conv3, k=2)


    temp_batch_size = tf.shape(x)[0]
    with tf.name_scope("deconv1") as scope:
        output_shape = [temp_batch_size, 50, 50, 64]
        conv4 = tf.nn.conv2d_transpose(conv3, weights['wdc1'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
        conv4 = tf.nn.bias_add(conv4, biases['bdc1'])
        conv4 = tf.nn.relu(conv4)
        # conv4 = tf.nn.local_response_normalization(conv4)

    with tf.name_scope("deconv2") as scope:
        # output_shape = tf.pack([temp_batch_size, 100, 100, 32])
        output_shape = [temp_batch_size, 100, 100, 32]
        conv5 = tf.nn.conv2d_transpose(conv4, weights['wdc2'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
        conv5 = tf.nn.bias_add(conv5, biases['bdc2'])
        conv5 = tf.nn.relu(conv5)
        # conv5 = tf.nn.local_response_normalization(conv5)

    with tf.name_scope("deconv3") as scope:
        # output_shape = tf.pack([temp_batch_size, 200, 200, 1])
        output_shape = [temp_batch_size, 200, 200, 2]
        conv6 = tf.nn.conv2d_transpose(conv5, weights['wdc3'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
        conv6 = tf.nn.bias_add(conv6, biases['bdc3'])
    # conv6 = tf.nn.relu(conv6)
    # pdb.set_trace()
    conv6 = tf.nn.dropout(conv6, dropout)

    return conv6
    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    # fc1 = tf.reshape(conv6, [-1, weights['wd1'].get_shape().as_list()[0]])
    # fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    # fc1 = tf.nn.relu(fc1)
    # # Apply Dropout
    # fc1 = tf.nn.dropout(fc1, dropout)
    #
    # return (tf.add(tf.matmul(fc1, weights['out']), biases['out']))

# Store layers weight & bias

weights = {
    # 5x5 conv, 1 input, 32 outputs
    'wc1' : tf.Variable(tf.random_normal([5, 5, 1, 32])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc2' : tf.Variable(tf.random_normal([5, 5, 32, 64])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc3' : tf.Variable(tf.random_normal([5, 5, 64, 128])),

    'wdc1' : tf.Variable(tf.random_normal([2, 2, 64, 128])),

    'wdc2' : tf.Variable(tf.random_normal([2, 2, 32, 64])),

    'wdc3' : tf.Variable(tf.random_normal([2, 2, 2, 32])),

    # fully connected, 7*7*64 inputs, 1024 outputs
    'wd1': tf.Variable(tf.random_normal([80000, 1024])),
    # 1024 inputs, 10 outputs (class prediction)
    'out': tf.Variable(tf.random_normal([1024, 80000]))
}

biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bc3': tf.Variable(tf.random_normal([128])),
    'bdc1': tf.Variable(tf.random_normal([64])),
    'bdc2': tf.Variable(tf.random_normal([32])),
    'bdc3': tf.Variable(tf.random_normal([2])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([80000]))
}

# Construct model
# with tf.name_scope("net") as scope:
pred = conv_net(x, weights, biases, keep_prob)
pred = tf.reshape(pred, [-1,n_input,n_input,n_classes])
# Define loss and optimizer
with tf.name_scope("loss") as scope:
    # cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
    temp_pred = tf.reshape(pred, [-1, 2])
    temp_y = tf.reshape(y, [-1, 2])
    cost = (tf.nn.softmax_cross_entropy_with_logits(temp_pred, temp_y))

with tf.name_scope("opt") as scope:
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    # optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)


# Evaluate model
with tf.name_scope("acc") as scope:
    correct_pred = tf.equal(0,tf.cast(tf.sub(tf.nn.softmax(temp_pred),y), tf.int32))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.initialize_all_variables()
saver = tf.train.Saver()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    summary = tf.train.SummaryWriter('/tmp/logdir/', sess.graph)
    step = 1
    from tensorflow.contrib.learn.python.learn.datasets.scroll import scroll_data
    data = scroll_data.read_data('/home/kendall/Desktop/')
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = data.train.next_batch(batch_size)
        # Run optimization op (backprop)
        batch_x = batch_x.reshape((batch_size, n_input, n_input))
        batch_y = batch_y.reshape((batch_size, n_input, n_input))
        batch_y = convert_to_2_channel(batch_y, batch_size) #converts the 200x200 ground truth to a 200x200x2 classification
        batch_y = batch_y.reshape(batch_size * n_input * n_input, 2)
        sess.run(optimizer, feed_dict={x: batch_x, temp_y: batch_y,
                                       keep_prob: dropout})
        #measure prediction
        prediction = sess.run(tf.nn.softmax(temp_pred), feed_dict={x: batch_x, keep_prob: dropout})
        print prediction
        if step % display_step == 0:
            # Calculate batch loss and accuracy
            save_path = "model.ckpt"
            saver.save(sess, save_path)
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                              y: batch_y,
                                                              keep_prob: dropout})
            print "Accuracy = " + str(acc)
            if acc > 0.73:
                break
        step += 1
    print "Optimization Finished!"

    #make prediction
    im = Image.open('/home/kendall/Desktop/HA900_frames/frame0035.tif')
    batch_x = np.array(im)
    # pdb.set_trace()
    batch_x = batch_x.reshape((1, n_input, n_input))
    batch_x = batch_x.astype(float)
    pdb.set_trace()
    prediction = sess.run(tf.nn.sigmoid(pred), feed_dict={x: batch_x, keep_prob: dropout})
    print prediction
    arr1 = np.empty((n_input,n_input))
    arr2 = np.empty((n_input,n_input))
    for i in xrange(n_input):
        for j in xrange(n_input):
            for k in xrange(2):
                if k == 0:
                    arr1[i][j] = (prediction[0][i][j][k])
                else:
                    arr2[i][j] = (prediction[0][i][j][k])
    # prediction = np.asarray(prediction)
    # prediction = np.reshape(prediction, (200,200))
    # np.savetxt("prediction.csv", prediction, delimiter=",")
    np.savetxt("prediction1.csv", arr1, delimiter=",")
    np.savetxt("prediction2.csv", arr2, delimiter=",")
    # np.savetxt("prediction2.csv", arr2, delimiter=",")

    # Calculate accuracy for 256 mnist test images
    print "Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: data.test.images[:256],
                                      y: data.test.labels[:256],
                                      keep_prob: 1.})
Olivier Moindrot

The whole idea of the deconvolutions is to output something of the same size as the input.

At this line:

conv6 = tf.nn.bias_add(conv6, biases['bdc3'])

you have an output of shape [batch_size, 200, 200, 2], so there is no need to add a fully connected layer. Just return conv6 (without the final ReLU).


Since you use 2 classes in your prediction and in the true labels y, you need to use tf.nn.softmax_cross_entropy_with_logits(), not the sigmoid cross entropy.

Make sure that y always contains values like y[i, j] = [0., 1.] or y[i, j] = [1., 0.].

pred = conv_net(x, weights, biases, keep_prob)  # NEW prediction conv6
pred = tf.reshape(pred, [-1, n_classes])
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))

Also, if you want your TensorBoard graph to look nice (or at least readable), make sure to use tf.name_scope().


EDIT:

Your accuracy is also wrong. You measure whether softmax(pred) and y are equal, but softmax(pred) can never be exactly equal to 0. or 1., so you will end up with an accuracy of 0.

Here is what you should do instead:

with tf.name_scope("acc") as scope:
    correct_pred = tf.equal(tf.argmax(temp_pred, 1), tf.argmax(temp_y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
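A minimal check of that metric on made-up values (hypothetical numbers, just to illustrate how the argmax comparison behaves):

import numpy as np

temp_pred = np.array([[0.2, 0.8],     # argmax -> class 1
                      [0.9, 0.1]])    # argmax -> class 0
temp_y    = np.array([[0., 1.],
                      [1., 0.]])
print np.mean(np.argmax(temp_pred, 1) == np.argmax(temp_y, 1))   # 1.0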

EDIT 2:

The real bug was a typo in convert_to_2_channel, in this loop:

for j in xrange(3):

It should be 200 instead of 3.

Lesson: when debugging, print everything step by step with very simple examples, and you will see that the buggy function returns the wrong output.
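Along the same lines, the hand-written loops in convert_to_2_channel can be replaced with a vectorized version that avoids this class of typo entirely (a sketch, assuming the input is a (batch_size, 200, 200) array of 0/1 labels):

import numpy as np

def convert_to_2_channel(x, batch_size):
    # x: (batch_size, 200, 200) array of 0/1 labels
    # batch_size is kept only for signature compatibility; it is unused here
    output = np.zeros(x.shape + (2,))
    output[..., 0] = (x == 1)   # channel 0: pixels labeled 1
    output[..., 1] = (x != 1)   # channel 1: everything else
    return output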
