Gradients of Logical Operators in Tensorflow

nfmcclure

I'm trying to create a very simple binary classifier in Tensorflow on generated data.

I'm generating random data from two separate normal distributions. Then I classify each resulting data point into one of two binary classes depending on whether it is less than or greater than a number, A.

Ideally, A will be a cutoff in the middle of the two normals. E.g., if my data is generated from N(1,1) and N(-1,1), then A should be approximately 0.
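For reference, the data setup described above can be sketched in NumPy like this (the sample size of 50 per class and the seed are my own choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 points drawn from N(-1, 1) labeled 0, and 50 from N(1, 1) labeled 1
x_data = np.concatenate([rng.normal(-1.0, 1.0, 50), rng.normal(1.0, 1.0, 50)])
labels = np.concatenate([np.zeros(50), np.ones(50)])

# The ideal cutoff A sits midway between the two means, i.e. near 0
print(x_data.shape, labels.shape)
```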

I'm running into a "No gradients provided for any variable..." error. Specifically:

No gradients provided for any variable: ((None, <tensorflow.python.ops.variables.Variable object at 0x7fd9e3fae710>),)

I think it may have to do with the fact that Tensorflow cannot calculate gradients for logical operators. My classification for any given A value is supposed to be something like:

Given a data point x and an A value:

[1,0] : if x < A

[0,1] : if x >= A

Given that idea, here is my calculation in Tensorflow for the output:

my_output = tf.concat(0,[tf.to_float(tf.less(x_data, A)), tf.to_float(tf.greater_equal(x_data, A))])

Is this the wrong way to implement this output? Is there a non-logical functional equivalent?
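To see why this output can't be optimized directly, note that the comparison produces a 0/1 step function of A, which is flat almost everywhere. A small finite-difference check (my own illustration, mirroring `tf.to_float(tf.less(x, A))` in plain NumPy) shows the gradient with respect to A is zero:

```python
import numpy as np

def hard_output(x, A):
    # Mirrors tf.to_float(tf.less(x, A)): a 0/1 step function of A
    return (x < A).astype(float)

x = np.array([-1.5, -0.2, 0.3, 1.1])
A = 0.5
eps = 1e-6

# Finite-difference "gradient" of the mean output w.r.t. A:
# the step function is flat almost everywhere, so this is 0
grad = (hard_output(x, A + eps).mean() - hard_output(x, A - eps).mean()) / (2 * eps)
print(grad)  # 0.0
```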

Thanks. If you want to see my whole code, here is a gist: https://gist.github.com/nfmcclure/46c323f0a55ae1628808f7a58b5d437f


Edit: Full Stack Trace:

Traceback (most recent call last):

  File "<ipython-input-182-f8837927493d>", line 1, in <module>
    runfile('/.../back_propagation.py', wdir='/')

  File "/usr/local/lib/python3.4/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 699, in runfile
    execfile(filename, namespace)

  File "/usr/local/lib/python3.4/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 88, in execfile
    exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)

  File "/.../back_propagation.py", line 94, in <module>
    train_step = my_opt.minimize(xentropy)

  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/training/optimizer.py", line 192, in minimize
    name=name)

  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/training/optimizer.py", line 286, in apply_gradients
    (grads_and_vars,))

ValueError: No gradients provided for any variable: ((None, <tensorflow.python.ops.variables.Variable object at 0x7fd9e3fae710>),)
szabadaba

Typically you would use a sigmoid function to squash the output of your model into the range (0, 1). You want to train the following function:

y = a*x_input + b, where a and b are trainable variables.

The loss function you would then use is tf.nn.sigmoid_cross_entropy_with_logits.

And to evaluate the class you would check whether sigmoid(y) > 0.5. The greater-than operator has no useful gradient, so it cannot be used on its own to build a loss that an optimizer can minimize.
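As a sketch of this suggestion, here is the same model trained with sigmoid cross-entropy, written in plain NumPy gradient descent rather than TensorFlow so the gradient is explicit (the sample sizes, learning rate, and step count are my own choices; the key fact used is that the derivative of the sigmoid cross-entropy loss with respect to the logit is sigmoid(y) - target):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
t = np.concatenate([np.zeros(100), np.ones(100)])  # binary targets

a, b = 0.0, 0.0  # trainable scalars, as in y = a*x + b
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    y = a * x + b                 # logits
    p = sigmoid(y)
    # d(loss)/d(logit) for sigmoid cross-entropy is simply (p - t)
    grad_y = (p - t) / len(x)
    a -= lr * np.sum(grad_y * x)
    b -= lr * np.sum(grad_y)

# Hard classification happens only at evaluation time, outside the loss
pred = (sigmoid(a * x + b) > 0.5).astype(float)
accuracy = (pred == t).mean()
print(accuracy)
```

Note that the non-differentiable `> 0.5` comparison appears only after training, for evaluation; the loss itself is smooth in a and b, which is what gives the optimizer a gradient to follow.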
