How come when the numpy array is a vector, the setting works and the dtype is implicitly converted to float, but when the numpy array is a matrix, the setting works yet the dtype is still int? Here's a demo script that illustrates the problem.
import numpy as np
# successfully sets / converts
x = np.array([100, 101])
c = -np.max(x)
x += c
print 'before', x.dtype
x = np.exp(x)
print 'after', x.dtype
print x
# doesn't successfully set / convert
matrix = np.array([(100, 101), (102, 103)])
for i in range(len(matrix)):
    c = -np.max(matrix[i])
    matrix[i] += c
    print 'before', matrix[i].dtype
    matrix[i] = np.exp(matrix[i])
    print 'after', matrix[i].dtype
print matrix
output:
before int64
after float64 <-- from vector
[ 0.36787944 1. ]
before int64
after int64 <-- from row 1 of matrix
before int64
after int64 <-- from row 2 of matrix
[[0 1]
[0 1]]
The numbers are integer-truncated, which was my original problem, traced down to this.
I'm using Python 2.7.11 and numpy 1.13.0.
Whenever you write a value into an existing array, the value is cast to match the array dtype. In your case, the resulting float64 value is cast to int64:
b = numpy.arange(4).reshape(2, 2)
b.dtype # dtype('int64')
Taking numpy.exp() of any of these values returns a float64:
numpy.exp(b[0, :]).dtype # dtype('float64')
But if you now take this float64 and write it back into the original int64 array, it needs to be cast first:
b[0, :] = numpy.exp(b[0, :])
b.dtype # dtype('int64')
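As a quick sketch of what that cast does to the values themselves:

```python
import numpy

# exp of [0, 1] is [1.0, 2.718...]; writing the result back into the
# integer array truncates the fractional parts on assignment.
b = numpy.arange(4).reshape(2, 2)
b[0, :] = numpy.exp(b[0, :])
print(b[0])  # [1 2] -- 2.718... was truncated to 2
```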
Note that using
b = numpy.exp(b)
creates a new array with its own dtype. If you instead did
b[:] = numpy.exp(b[:])
you would be implicitly casting to int64 again.
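One way around the truncation, sketched under the assumption that you want float results throughout, is to convert the array to a float dtype up front so the in-place row writes no longer need a cast:

```python
import numpy as np

# Convert to float64 first so row-wise writes keep their fractional parts.
matrix = np.array([(100, 101), (102, 103)]).astype(np.float64)
for i in range(len(matrix)):
    matrix[i] -= np.max(matrix[i])  # row becomes [-1., 0.]
    matrix[i] = np.exp(matrix[i])   # stays float64, no truncation
print(matrix)  # rows are approximately [0.3679, 1.0]
```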
Also note that there is no need to write a loop like you did. Instead you can vectorize the operation:
numpy.exp(matrix - numpy.max(matrix, axis=1, keepdims=True))
# array([[ 0.36787944, 1. ],
# [ 0.36787944, 1. ]])
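Since the vectorized expression builds a new array rather than writing into the integer one, the result comes out as float64; a quick check, assuming the same example matrix:

```python
import numpy as np

matrix = np.array([(100, 101), (102, 103)])
result = np.exp(matrix - np.max(matrix, axis=1, keepdims=True))
print(result.dtype)  # float64 -- a new array, so no cast back to int
print(matrix.dtype)  # the original array keeps its integer dtype
```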