tensorflow - What is the meaning of the plot in TensorBoard when using Queues?


I use TensorBoard to monitor the training process. Most plots look fine, but there are some plots that confuse me.

First, using_queues_lib.py (it uses queues and multiple threads to read binary data; see the CIFAR-10 example):

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os

from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf

num_examples_per_epoch_for_train = 50000
real32_bytes = 4


def read_dataset(filename_queue, data_length, label_length):
  class Record(object):
    pass
  result = Record()

  result_data = data_length * real32_bytes
  result_label = label_length * real32_bytes
  record_bytes = result_data + result_label

  reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
  result.key, value = reader.read(filename_queue)

  record_bytes = tf.decode_raw(value, tf.float32)
  result.data = tf.strided_slice(record_bytes, [0], [data_length])  # record_bytes: tf.float32 list
  result.label = tf.strided_slice(record_bytes, [data_length],
                                  [data_length + label_length])
  return result


def _generate_data_and_label_batch(data, label, min_queue_examples,
                                   batch_size, shuffle):
  num_preprocess_threads = 16  # only to speed up the code
  if shuffle:
    data_batch, label_batch = tf.train.shuffle_batch(
        [data, label],
        batch_size=batch_size,
        num_threads=num_preprocess_threads,
        capacity=min_queue_examples + batch_size,
        min_after_dequeue=min_queue_examples)
  else:
    data_batch, label_batch = tf.train.batch(
        [data, label],
        batch_size=batch_size,
        num_threads=num_preprocess_threads,
        capacity=min_queue_examples + batch_size)
  return data_batch, label_batch


def inputs(data_dir, batch_size, data_length, label_length):
  filenames = [os.path.join(data_dir, 'test_data_se.dat')]
  for f in filenames:
    if not tf.gfile.Exists(f):
      raise ValueError('Failed to find file: ' + f)

  filename_queue = tf.train.string_input_producer(filenames)

  read_input = read_dataset(filename_queue, data_length, label_length)

  read_input.data.set_shape([data_length])    # important
  read_input.label.set_shape([label_length])  # important

  min_fraction_of_examples_in_queue = 0.4
  min_queue_examples = int(num_examples_per_epoch_for_train *
                           min_fraction_of_examples_in_queue)
  print('Filling queue with %d samples before starting to train. '
        'This will take a few minutes.' % min_queue_examples)

  return _generate_data_and_label_batch(read_input.data, read_input.label,
                                        min_queue_examples, batch_size,
                                        shuffle=True)
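As a side note, the fixed-length record layout that read_dataset expects is easy to check without TensorFlow. The following is a minimal pure-Python sketch (the helper names record_layout and pack_record are mine, not from the question): each record is data_length float32 values followed by label_length float32 values, 4 bytes each.

```python
import struct

real32_bytes = 4  # size of one float32, as in the question


def record_layout(data_length, label_length):
    """Return (data_bytes, label_bytes, record_bytes) for one record."""
    result_data = data_length * real32_bytes
    result_label = label_length * real32_bytes
    return result_data, result_label, result_data + result_label


def pack_record(data, label):
    """Pack one record in the layout a fixed-length record reader consumes:
    data floats first, then label floats, all little-endian float32."""
    values = list(data) + list(label)
    return struct.pack('<%df' % len(values), *values)


# The question uses data_length=2, label_length=1, i.e. 12-byte records.
data_bytes, label_bytes, record_bytes = record_layout(2, 1)
print(record_bytes)                          # 12
print(len(pack_record([0.5, 1.5], [1.0])))   # 12
```

Writing the .dat file with pack_record for every sample guarantees that record_bytes in read_dataset matches the file on disk.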

In the main function, I write:

data_train, labels_train = using_queues_lib.inputs(
    filenames=r'./training.dat',
    batch_size=32,
    data_length=2,
    label_length=1,
    name='training')
data_validate, labels_validate = using_queues_lib.inputs(
    filenames=r'./validating.dat',
    batch_size=32*30,
    data_length=2,
    label_length=1,
    name='validating')

And the summary part is:

with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.square(y_ - y))
    loss_summary = tf.summary.scalar('loss', loss)

with tf.name_scope('train'):
    global_step = tf.Variable(0, trainable=False)
    learning_rate = ...
    tf.summary.scalar('learning_rate', learning_rate)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss, global_step=global_step)

sess = tf.InteractiveSession(config=config)
summary_op = tf.summary.merge_all()
summaries_dir = './logs'
train_writer = tf.summary.FileWriter(summaries_dir + '/train', sess.graph)
validate_writer = tf.summary.FileWriter(summaries_dir + '/validate')

tf.global_variables_initializer().run()
tf.train.start_queue_runners()

for epoch in xrange(training_epochs):
    batchnum_per_epoch = training_data_samples_length / batch_size
    for i in xrange(batchnum_per_epoch):
        data_batch, label_batch = sess.run([data_train, labels_train])
        summary, _ = sess.run([summary_op, train_step],
                              feed_dict={x: data_batch, y_: label_batch})
        train_writer.add_summary(summary, sess.run(global_step))

    data_batch_validate, label_batch_validate = \
        sess.run([data_validate, labels_validate])
    summary, loss_value_validate = sess.run(
        [loss_summary, loss],
        feed_dict={x: data_batch_validate, y_: label_batch_validate})
    validate_writer.add_summary(summary, sess.run(global_step))
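One detail worth checking in the loop above: in Python 3, training_data_samples_length / batch_size yields a float, and xrange/range requires an int, so floor division is safer. A minimal sketch with assumed numbers (the question only fixes batch_size=32; 50000 is the num_examples_per_epoch_for_train constant from the library file):

```python
# Assumed: 50000 training samples (as in num_examples_per_epoch_for_train)
# and the question's batch_size of 32.
training_data_samples_length = 50000
batch_size = 32

# '/' gives a float in Python 3, so range() would raise TypeError.
# '//' floors to an int, silently dropping the final partial batch.
batchnum_per_epoch = training_data_samples_length // batch_size
print(batchnum_per_epoch)  # 1562
```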

In TensorBoard I see plots whose meaning I don't know.

First: [TensorBoard screenshot]

Second: [TensorBoard screenshot]

You didn't post the source code that generates this summary, but judging from the graph I think it is plotting the fraction num_elements_in_the_queue / capacity_of_the_queue at each summary step (the light-colored vertical lines are the data points, while the darker orange line is a smoothed average).
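To illustrate what such a plot shows, here is a small pure-Python sketch (all numbers are assumed for illustration) of the queue-fullness fraction, plus an exponential moving average similar in spirit to what TensorBoard's smoothing slider draws as the darker line:

```python
def queue_fullness(size, capacity):
    """The plotted fraction: elements currently in the queue / capacity."""
    return size / float(capacity)


def smooth(points, weight=0.6):
    """Exponential moving average over a series of scalar points,
    similar in spirit to TensorBoard's smoothing slider."""
    smoothed, last = [], points[0]
    for p in points:
        last = last * weight + (1 - weight) * p
        smoothed.append(last)
    return smoothed


# Assumed queue sizes at successive summary steps. With the question's
# numbers, min_queue_examples = int(50000 * 0.4) = 20000 and
# capacity = min_queue_examples + batch_size = 20032.
capacity = 20032
sizes = [20032, 19800, 20010, 19500, 20032]
raw = [queue_fullness(s, capacity) for s in sizes]
print([round(f, 3) for f in raw])        # raw points (light lines)
print([round(f, 3) for f in smooth(raw)])  # smoothed curve (darker line)
```

A fraction hovering near 1.0 means the input pipeline keeps the queue full (readers are fast enough); a fraction trending toward 0 means the training step is starved for data.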

