Tensorflow is slow when using multiple threads for CPU preprocessing

I have a dataset that is generated dynamically on the CPU. Samples are computed in Python by a function make_sample, which is quite complex and cannot be translated into TensorFlow operations. Since sample generation is time-consuming, I would like to call the function from multiple threads to fill an input queue.

I started from the example given in the documentation and arrived at the following toy example:
import numpy as np
import tensorflow as tf
import time

def make_sample():
    # something that takes time and needs to be on CPU w/o tf ops
    p = 1
    for n in range(1000000):
        p = (p + np.random.random()) * np.random.random()
    return np.float32(p)

read_threads = 1

with tf.device('/cpu:0'):
    example_list = [tf.py_func(make_sample, [], [tf.float32])
                    for _ in range(read_threads)]
    for ex in example_list:
        ex[0].set_shape(())
    batch_size = 3
    capacity = 30
    batch = tf.train.batch_join(example_list, batch_size=batch_size,
                                capacity=capacity)

with tf.Session().as_default() as sess:
    tf.global_variables_initializer().run()
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        # dry run, left out of timing
        sess.run(batch)
        start_time = time.time()
        for it in range(5):
            print(sess.run(batch))
    finally:
        duration = time.time() - start_time
        print('duration: {0:4.2f}s'.format(duration))
        coord.request_stop()
        coord.join(threads)
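To take TensorFlow out of the picture, the same workload can be timed with plain Python threads. This sketch is my own addition, not part of the original question: it uses a scaled-down copy of make_sample (100,000 iterations instead of 1,000,000 so it finishes quickly), and the helper name run_threads is hypothetical.

```python
import threading
import time

import numpy as np

def make_sample():
    # scaled-down copy of the question's make_sample: pure-Python,
    # CPU-bound work with no TensorFlow ops involved
    p = 1
    for n in range(100000):
        p = (p + np.random.random()) * np.random.random()
    return np.float32(p)

def run_threads(num_threads, calls_per_thread=2):
    # call make_sample from several threads and time the total wall clock
    def worker():
        for _ in range(calls_per_thread):
            make_sample()
    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

if __name__ == '__main__':
    # total work doubles with the thread count, so ideal scaling
    # would keep the wall-clock time roughly constant
    for n in (1, 2, 4):
        print('threads={0}: {1:4.2f}s'.format(n, run_threads(n)))
```

If the wall-clock time fails to improve here as well, the contention is in the Python-level sample generation itself rather than in the queueing code.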
What surprised me most is that, when increasing read_threads, CPU usage never goes above 50%. Even worse, computation time increases sharply. On my machine:

read_threads=1 → duration: 12s
read_threads=2 → duration: 46s
read_threads=4 → duration: 68s
read_threads=8 → duration: 112s
Is there an explanation for this and, most importantly, a solution for getting efficient multi-threaded data generation with a custom Python function in TensorFlow?