
TensorFlow: choosing which GPU to use when multiple GPUs are available

I am new to TensorFlow and have installed CUDA 7.5 and cuDNN v4 following the instructions on the TensorFlow website. After adjusting the TensorFlow configuration file, I tried to run the following example from the website:

python -m tensorflow.models.image.mnist.convolutional 

However, I am fairly sure that TensorFlow is using one of the GPUs and not the other one, which I would like it to use because it is faster. I would like to know whether this example code defaults to the first GPU it finds. If so, how do I choose which GPU to use in my TensorFlow code?

When I run the example code, the messages I get are:

ldt-tesla:~$ python -m tensorflow.models.image.mnist.convolutional 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally 
Extracting data/train-images-idx3-ubyte.gz 
Extracting data/train-labels-idx1-ubyte.gz 
Extracting data/t10k-images-idx3-ubyte.gz 
Extracting data/t10k-labels-idx1-ubyte.gz 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: Tesla K20c 
major: 3 minor: 5 memoryClockRate (GHz) 0.7055 
pciBusID 0000:03:00.0 
Total memory: 4.63GiB 
Free memory: 4.57GiB 
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x2f27390 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 1 with properties: 
name: Quadro K2200 
major: 5 minor: 0 memoryClockRate (GHz) 1.124 
pciBusID 0000:02:00.0 
Total memory: 3.95GiB 
Free memory: 3.62GiB 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 0 to device ordinal 1 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 1 to device ordinal 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 1 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y N 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 1: N Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K20c, pci bus id: 0000:03:00.0) 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:793] Ignoring gpu device (device: 1, name: Quadro K2200, pci bus id: 0000:02:00.0) with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT. 
Initialized! 

Answer

You can set the CUDA_VISIBLE_DEVICES environment variable to expose only the devices you want; quoting the examples on masking gpus (a short usage sketch follows the list):

CUDA_VISIBLE_DEVICES=1 Only device 1 will be seen 
CUDA_VISIBLE_DEVICES=0,1 Devices 0 and 1 will be visible 
CUDA_VISIBLE_DEVICES="0,1" Same as above, quotation marks are optional 
CUDA_VISIBLE_DEVICES=0,2,3 Devices 0, 2, 3 will be visible; device 1 is masked 
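
For example, a minimal sketch of applying this from inside a Python script rather than on the command line (the device index 1 is only illustrative; the variable must be set before TensorFlow initializes CUDA, i.e. before the first tensorflow import):

import os

# Expose only CUDA device 1 to this process; TensorFlow will then see it as /gpu:0.
# Setting this after tensorflow has already been imported has no effect, because
# CUDA reads the variable when the driver context is created.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf

The same masking works on the command line, e.g. CUDA_VISIBLE_DEVICES=0 python -m tensorflow.models.image.mnist.convolutional to run the example on device 0 only.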

Thanks! That seems to do the job and gets rid of that error :). I also get a message saying "Ignoring gpu device ... with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with ...". Following the same approach you suggested, I can use the environment variable to change the count, but I do not know what it means. What do the count and the minimum count refer to? Thanks! –
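
For reference, the count in that message is the number of streaming multiprocessors (SMs) on the card; TensorFlow skips GPUs below the threshold on the assumption that they are too small to be worth scheduling work on. A minimal sketch of lowering the threshold so the 5-SM Quadro K2200 from the log above is accepted (the variable name comes from the log message itself, and it must be set before TensorFlow creates its GPU devices):

import os

# Accept GPUs with as few as 5 streaming multiprocessors instead of the default 8,
# so the Quadro K2200 is no longer ignored.
os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"] = "5"

import tensorflow as tf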


Another option - https://stackoverflow.com/questions/40069883/how-to-set-specific-gpu-in-tensorflow/44848050#44848050 – Nandeesh
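
That linked question covers pinning operations to a device in code; a minimal sketch of that approach, assuming the graph-mode tf.device()/tf.Session API from the TensorFlow version in this log:

import tensorflow as tf

# Place these ops on the second GPU explicitly. allow_soft_placement lets TensorFlow
# fall back to another device if an op has no GPU kernel, and log_device_placement
# prints where each op actually ran.
with tf.device('/gpu:1'):
    a = tf.constant([1.0, 2.0], name='a')
    b = tf.constant([3.0, 4.0], name='b')
    c = a + b

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))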