How to shift a sequence (hidden layer) to the left in Keras?
I have code that seems to do the job, but it behaves differently from what I expect.
I want to have a (non-trainable) layer as part of a deep learning model that shifts a sequence of vectors (a hidden layer) one step to the left. The framework is Keras 2 with the Theano backend.
To provide a minimal example: if the input layer is a 3-time-step sequence with 2 hidden components,
[[0, 1], [2, 3], [4, 5]]
then the output of the layer should be the input shifted left (with zero padding):
[[2, 3], [4, 5], [0, 0]]
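For reference, the intended transformation can be written directly in NumPy (a sketch outside Keras, just to pin down the expected output):

```python
import numpy as np

seq = np.array([[0, 1], [2, 3], [4, 5]])

# shift one step left along the time axis, zero-padding the tail
shifted = np.zeros_like(seq)
shifted[:-1] = seq[1:]

print(shifted)
# [[2 3]
#  [4 5]
#  [0 0]]
```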
I figured a 1-D convolution could do the job if I specify the weights appropriately. With a convolution of size 3, I set the kernel weights at the left and middle positions to 0, and put diagonal weights at the right position (just copying the 1st and 2nd dimensions):
[[[ 0., 0.], [ 0., 0.]], [[ 0., 0.], [ 0., 0.]], [[ 1., 0.], [ 0., 1.]]]
However, if I do this, the vector gets shifted right, not left. A working example:

import keras
import numpy as np

dim, length = 2, 3
input_mat = np.arange(dim * length).reshape(1, length, dim)
inp = keras.layers.Input(shape=(length, dim))
shift_left_kernel = np.asarray([np.zeros((dim, dim)), np.zeros((dim, dim)), np.eye(dim)])
outp = keras.layers.Conv1D(dim, length, padding='same',
                           kernel_initializer='zeros', use_bias=False,
                           trainable=False, weights=[shift_left_kernel])(inp)
model_network = keras.models.Model(inputs=inp, outputs=outp)
print(model_network.predict([input_mat]))
# [[[ 0.  0.]
#   [ 0.  1.]
#   [ 2.  3.]]]
Instead, I need to use
shift_left_kernel = np.asarray([np.eye(dim), np.zeros((dim,dim)),np.zeros((dim,dim))])
which seems illogical to me (I would expect this kernel to shift right, not left). Where is the crack in my logic?
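The observed behaviour is consistent with the backend reversing the kernel before sliding it over the input (a true convolution). A NumPy sketch of a 'same'-padded convolution that flips the kernel reproduces what the code above observes; `conv1d_same_flipped` is a hypothetical helper written for illustration, not part of Keras:

```python
import numpy as np

def conv1d_same_flipped(x, kernel):
    # Emulate a 'same'-padded 1-D convolution that reverses the kernel
    # before sliding it (a true convolution), for an input of shape
    # (time, dim) and a kernel of shape (taps, dim, dim).
    k = kernel[::-1]                      # the kernel flip
    pad = k.shape[0] // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))  # zero-pad the time axis
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for j in range(k.shape[0]):
            out[t] += xp[t + j] @ k[j]
    return out

dim, length = 2, 3
x = np.arange(dim * length).reshape(length, dim).astype(float)
eye, zero = np.eye(dim), np.zeros((dim, dim))

print(conv1d_same_flipped(x, np.asarray([eye, zero, zero])))  # left shift
print(conv1d_same_flipped(x, np.asarray([zero, zero, eye])))  # right shift
```

With the flip, the identity placed at the leftmost tap lands on the right after reversal, which is why it ends up selecting the next time step and shifting left.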
It seems you have solved this yourself, but I agree the right/left behaviour is confusing. I think it comes down to a low-level implementation choice of whether to reverse the convolution kernel or not. From the TensorFlow documentation: https://www.tensorflow.org/versions/r0.11/api_docs/python/nn/convolution

"Note that although these ops are called 'convolution', strictly speaking they are 'cross-correlation', since the filter is combined with an input window without reversing the filter. For details, see the properties of cross-correlation."
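The distinction is easy to see in plain NumPy, where `np.convolve` reverses the kernel (true convolution) while `np.correlate` does not (cross-correlation) — the same identity-at-the-rightmost-tap kernel then shifts in opposite directions:

```python
import numpy as np

a = np.array([0., 1., 2.])
k = np.array([0., 0., 1.])  # "identity" at the rightmost tap

conv = np.convolve(a, k, mode='same')    # flips k: true convolution
corr = np.correlate(a, k, mode='same')   # no flip: cross-correlation

print(conv)  # [0. 0. 1.]  -> shifted right
print(corr)  # [1. 2. 0.]  -> shifted left
```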