python - sparse_softmax_cross_entropy_with_logits results are worse than softmax_cross_entropy_with_logits


I am implementing a classic image classification problem in TensorFlow with 9 classes. At first I used softmax_cross_entropy_with_logits as the classifier and trained the network; after some steps it reaches about 99% train accuracy.

Then I tested the same problem with sparse_softmax_cross_entropy_with_logits, and this time it doesn't converge at all (train accuracy stays around 0.10 to 0.20).

For information: with softmax_cross_entropy_with_logits I use labels of shape [batch_size, num_classes] with dtype float32, while with sparse_softmax_cross_entropy_with_logits I use labels of shape [batch_size] with dtype int32.
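A minimal sketch of the two label formats (assuming TensorFlow 1.x; the placeholder names and num_classes = 9 are just for illustration):

    import tensorflow as tf

    logits = tf.placeholder(tf.float32, [None, 9])

    # softmax_cross_entropy_with_logits expects one-hot labels, shape [batch_size, num_classes]
    onehot_labels = tf.placeholder(tf.float32, [None, 9])
    dense_loss = tf.nn.softmax_cross_entropy_with_logits(labels=onehot_labels, logits=logits)

    # sparse_softmax_cross_entropy_with_logits expects class indices, shape [batch_size]
    sparse_labels = tf.placeholder(tf.int32, [None])
    sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=sparse_labels, logits=logits)

    # the two formats are interchangeable:
    #   one-hot from indices: tf.one_hot(sparse_labels, depth=9)
    #   indices from one-hot: tf.argmax(onehot_labels, axis=1)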

Does anyone have an idea what is going wrong?

Update:

This is the code:

    def costfun(self):
        # the sparse loss expects 1-D int labels of shape [batch_size]
        self.y_ = tf.reshape(self.y_, [-1])
        return tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.y_, logits=self.score_))

    def updatefun(self):
        return tf.train.AdamOptimizer(learning_rate=self.lr_).minimize(self.cost_)

    def perffun(self):
        # NOTE: this line turned out to be the bug (see the update below)
        correct_pred = tf.equal(tf.argmax(self.score_, 1), tf.argmax(y, 1))
        return tf.reduce_mean(tf.cast(correct_pred, tf.float32))

    def __init__(self, x, y, lr, lyr1filterno, lyr2filterno, lyr3filterno,
                 fchidlyrsize, inlyrsize, outlyrsize, keepprob):
        self.x_            = x
        self.y_            = y
        self.lr_           = lr
        self.inlyrsize     = inlyrsize
        self.outlyrsize_   = outlyrsize
        self.lyr1filterno_ = lyr1filterno
        self.lyr2filterno_ = lyr2filterno
        self.lyr3filterno_ = lyr3filterno
        self.fchidlyrsize_ = fchidlyrsize
        self.keepprob_     = keepprob

        [self.params_w_, self.params_b_] = convnet.paramsfun(self)
        self.score_, self.packshow_      = convnet.scorefun(self)
        self.cost_                       = convnet.costfun(self)
        self.update_                     = convnet.updatefun(self)
        self.perf_                       = convnet.perffun(self)

main:

    lyr1filterno = 32
    lyr2filterno = 64
    lyr3filterno = 128
    fchidlyrsize = 1024
    inlyrsize    = 32 * 32
    outlyrsize   = 9
    lr           = 0.001
    batch_size   = 300
    dropout      = 0.5

    x = tf.placeholder(tf.float32, [None, inlyrsize])
    y = tf.placeholder(tf.int32, None)   # integer class indices for the sparse loss

    convnet_class = convnet(x, y, lr, lyr1filterno, lyr2filterno, lyr3filterno,
                            fchidlyrsize, inlyrsize, outlyrsize, keepprob)
    initvar = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(initvar)
        for step in range(10000):
            trdata_i  = np.reshape(trdata_i,  (-1, 32 * 32))
            trlabel_i = np.reshape(trlabel_i, (-1, 1))

            update_i, packshow, wlyr1_i, wlyr2_i, wlyr3_i = sess.run(
                [convnet_class.update_, convnet_class.packshow_,
                 convnet_class.params_w_['wlyr1'], convnet_class.params_w_['wlyr2'],
                 convnet_class.params_w_['wlyr3']],
                feed_dict={x: trdata_i, y: trlabel_i, keepprob: dropout})

I found the problem, thanks to @mrry's helpful comment. The mistake was in my calculation of accuracy; in fact, "sparse_softmax" and "softmax" produce the same loss (or cost) for the same input logits.
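A quick way to check this claim (a sketch assuming TensorFlow 1.x; the logits and label values are made up):

    import numpy as np
    import tensorflow as tf

    logits_np = np.random.randn(4, 9).astype(np.float32)   # hypothetical scores
    idx_np    = np.array([0, 3, 8, 2], dtype=np.int32)     # hypothetical class indices

    logits = tf.constant(logits_np)
    sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=tf.constant(idx_np), logits=logits)
    dense = tf.nn.softmax_cross_entropy_with_logits(
        labels=tf.one_hot(idx_np, depth=9), logits=logits)

    with tf.Session() as sess:
        s, d = sess.run([sparse, dense])
        print(np.allclose(s, d))   # True: identical per-example losses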

For the accuracy computation, I changed

correct_pred = tf.equal(tf.argmax(self.score_,1), tf.argmax(y,1))

to

correct_pred = tf.equal(tf.argmax(self.score_, 1), y)

since in "sparse_softmax" ground truth labels not in one-hot vector format, real int32 or int64 numbers.

