Expanding a tensor in TensorFlow

Jared

In TensorFlow, I want to manipulate a tensor with the Taylor series of sin(x), keeping a few approximation terms. I tried this on a grayscale image of shape (32, 32) and it worked fine. Now, moving from a grayscale image of shape (32, 32) to an RGB image of shape (32, 32, 3), I run into trouble: the same approach does not give me the correct array. Intuitively, I am trying to manipulate a tensor with the Taylor expansion of sin(x). Can anyone show me a possible way to do this in TensorFlow? Any ideas?

我的尝试

Here is the Taylor expansion of sin(x) at x = 0 with three terms: x - x**3/6 + x**5/120.
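As a quick sanity check of those three terms, here is a minimal sketch in plain NumPy (separate from my attempt below):

import numpy as np

x = np.linspace(0.0, 1.0, 5)
terms = [x, -x**3 / 6.0, x**5 / 120.0]            # x - x^3/3! + x^5/5!
approx = sum(terms)
print(np.allclose(approx, np.sin(x), atol=1e-3))  # True for small x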

import numpy as np
from math import factorial
from tensorflow.keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()

x = X_train[1, :, :].astype('float32')   # one grayscale image
n_terms = 3                              # number of Taylor terms to keep
func = 'sin(x)'

new_x = np.zeros((x.shape[0], x.shape[1] * n_terms))
new_x = new_x.astype('float32')
nn = 0
for i in range(x.shape[1]):
    col_d = x[:, i].ravel()
    new_x[:, nn] = col_d                 # first term of sin(x): x
    if n_terms > 0:
        for j in range(1, n_terms):
            if func == 'sin(x)':
                # next odd-power term: (-1)^j * x^(2j+1) / (2j+1)!
                c = 2 * j + 1
                new_x[:, nn + j] = ((-1) ** j / factorial(c)) * col_d ** c

I think I could do this more efficiently with TensorFlow, but it is not very intuitive to me. Can anyone suggest a workable way to get this done? Any ideas?

Update

col_d = x[:,i].ravel() flattens a column of pixels of the 2-D array into a vector. Similarly, we could reshape a 3-D array into a 2-D one with x.transpose(0,1,2).reshape(x.shape[1],-1); inside the for loop it might be x[:,i].transpose(0,1,2).reshape(x.shape[1],-1), but this is still not correct. I think TensorFlow probably has a better way to do this. How can we manipulate a tensor with the Taylor series of sin(x) more efficiently? Any ideas?
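For example, the round trip between a 3-D image and a 2-D layout can be checked like this (a small sketch, assuming a single RGB image as a NumPy array):

import numpy as np

x = np.random.rand(32, 32, 3)       # stand-in for one RGB image
x2d = x.reshape(x.shape[0], -1)     # shape (32, 96): each row holds (column, channel) pairs
x3d = x2d.reshape(32, 32, 3)        # the inverse reshape recovers the original image
print(np.array_equal(x, x3d))       # True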

Goal

Intuitively, in the Taylor series of sin(x), x is the tensor; if we want only, say, 2 or 3 approximation terms of the Taylor series of sin(x) for each tensor, I would like to concatenate them into a new tensor. How should we do this efficiently in TensorFlow? Any ideas?
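One direct way to express this is sketched below (three terms, only vectorized TensorFlow ops; the helper taylor_sin_terms is just for illustration):

import tensorflow as tf

def taylor_sin_terms(x, n_terms=3):
    # term i of sin(x): (-1)^i * x^(2i+1) / (2i+1)!
    terms = []
    factorial = 1.0
    for i in range(n_terms):
        power = 2 * i + 1
        if i > 0:
            factorial *= (power - 1) * power
        terms.append(((-1.0) ** i / factorial) * x ** power)
    return tf.stack(terms, axis=-1)        # terms are stacked along a new last axis

x = tf.random.uniform((32, 32, 3))         # stand-in for an RGB image scaled to [0, 1]
terms = taylor_sin_terms(x, n_terms=3)     # shape (32, 32, 3, 3)
approx = tf.reduce_sum(terms, axis=-1)     # three-term approximation of sin(x)
print(tf.reduce_max(tf.abs(approx - tf.sin(x))).numpy())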

Answer

new_x = np.zeros((x.shape[0], x.shape[1] * n_terms))

This line doesn't make sense: why allocate space for 96 columns when only 3 Taylor expansion terms are needed?

(new_x[:, 3:] == 0.0).all()   # True: everything beyond the first 3 columns stays zero

Taylor expansion of pixels with n terms


import numpy as np
from math import factorial
from tensorflow.keras.datasets import cifar10

def sin_exp_step(x, i):
  # i-th term of the Taylor series of sin(x): (-1)^i * x^(2i+1) / (2i+1)!
  c1 = 2 * i + 1
  c2 = (-1) ** i / factorial(c1)
  return c2 * (x ** c1)

# validate the step function against np.sin for a scalar
n_terms = 3

x = (np.pi / 180.0) * 45.0   # 45 degrees in radians
y = np.sin(x)

approx_y = 0.0
for i in range(n_terms):
  approx_y += sin_exp_step(x, i)

assert abs(approx_y - y) < 1e-4   # three terms are accurate to ~4e-5 here

# expand one RGB image: each Taylor term gets its own trailing axis
(X_train, _), _ = cifar10.load_data()
x = X_train[1, :, :, :].astype('float32')   # shape (32, 32, 3)
func = 'sin(x)'

new_x = np.zeros((*x.shape, n_terms))
for i in range(n_terms):
  if func == 'sin(x)':
    new_x[..., i] += sin_exp_step(x, i)

In general, avoid numerical approximation methods like this: they are computationally expensive (factorials) and less stable. Higher-order algorithms such as BFGS and L-BFGS do approximate the Hessian matrix (second derivatives), but gradient-based optimization is usually best; optimizers such as Adam and SGD are sufficient and much cheaper to compute. Using a neural network, we may even be able to find a better expansion.


TensorFlow solution for the n-term expansion

import numpy as np

import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.layers import Input, LocallyConnected2D
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

x_train = tf.constant(x_train, dtype=tf.float32)
x_test = tf.constant(x_test, dtype=tf.float32)

def expansion_approx_of(func):
  # reconstruction loss: the summed expansion terms should approximate func(y_true)
  def reconstruction_loss(y_true, y_pred):
    loss = (y_pred - func(y_true)) ** 2
    loss = 0.5 * K.mean(loss)
    return loss
  return reconstruction_loss

class Expansion2D(LocallyConnected2D): # n-terms expansion layer

  def __init__(self, i_shape, n_terms, kernel_size=(1, 1), *args, **kwargs):
    if len(i_shape) != 3:
      raise ValueError('...')
    self.i_shape = i_shape
    self.n_terms = n_terms
    # one group of n_terms filters per input channel
    filters = self.n_terms * self.i_shape[-1]
    super(Expansion2D, self).__init__(filters=filters, kernel_size=kernel_size,
                                      use_bias=False, *args, **kwargs)

  def call(self, inputs):
    shape = (-1, self.i_shape[0], self.i_shape[1], self.i_shape[-1], self.n_terms)
    out = super().call(inputs)                    # (batch, H, W, n_terms * channels)
    expansion = tf.reshape(out, shape)            # split the terms onto their own axis
    out = tf.math.reduce_sum(expansion, axis=-1)  # summed terms approximate func(x)
    return out, expansion

inputs = Input(shape=(32, 32, 3))

# expansion: might be a taylor expansion or something better.
out, expansion = Expansion2D(i_shape=(32, 32, 3), n_terms=3)(inputs)

model = Model(inputs, [out, expansion])

opt = tf.keras.optimizers.Adam(learning_rate=0.0001, beta_1=0.9, beta_2=0.999)
loss = expansion_approx_of(K.sin)

model.compile(optimizer=opt, loss=[loss])

model.summary()

model.fit(x_train, x_train, batch_size=1563, epochs=100)

x_pred, x_exp = model.predict_on_batch(x_test[:32])

print((x_exp[0].sum(axis=-1) == x_pred[0]).all())   # the expansion terms sum back to the prediction

err = abs(x_pred - np.sin(x_test[:32])).mean()       # mean absolute error against sin(x) on the same batch
print(err)
