I have an Nvidia GTX 1080 machine running Ubuntu 14.04. I am trying to implement a convolutional autoencoder with TensorFlow 1.0.1, but the program does not seem to be using the GPU at all. I verified this with watch nvidia-smi and htop. The output after running the program is as follows:

I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
getting into solving the reconstruction loss
Dimension of z i.e. our latent vector is [None, 100]
Dimension of the output of the decoder is [100, 28, 28, 1]
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:0a:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x34bccc0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 1 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:09:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x34c0940
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 2 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:06:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x34c45c0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 3 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:05:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 1 2 3
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 1:   Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 2:   Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 3:   Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:0a:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080, pci bus id: 0000:09:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:2) -> (device: 2, name: GeForce GTX 1080, pci bus id: 0000:06:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:3) -> (device: 3, name: GeForce GTX 1080, pci bus id: 0000:05:00.0)

Could there be a problem in my code? I also tried pinning the graph to a specific device with with tf.device("/gpu:0"): before building it. Please let me know if you need more information.
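
A minimal sketch of that kind of device pinning (this is not my actual autoencoder; the matmul graph below is purely illustrative), combined with log_device_placement to print where each op actually lands, would look like this:

    import tensorflow as tf

    # Purely illustrative graph; the real autoencoder is not shown here.
    with tf.device("/gpu:0"):
        a = tf.random_normal([1000, 1000])
        b = tf.random_normal([1000, 1000])
        c = tf.matmul(a, b)

    # log_device_placement=True prints the device each op is assigned to,
    # which is a more direct check than watching nvidia-smi.
    config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(c)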

Edit 1: output of nvidia-smi

$ nvidia-smi
Wed Apr 19 20:50:07 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.48                 Driver Version: 367.48                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:05:00.0     Off |                  N/A |
| 38%   54C    P8    12W / 180W |   7715MiB /  8113MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    Off  | 0000:06:00.0     Off |                  N/A |
| 38%   55C    P8     8W / 180W |   7715MiB /  8113MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 1080    Off  | 0000:09:00.0     Off |                  N/A |
| 36%   50C    P8     8W / 180W |   7715MiB /  8113MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 1080    Off  | 0000:0A:00.0     Off |                  N/A |
| 35%   54C    P2    41W / 180W |   7833MiB /  8113MiB |      8%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     24228    C   python3                                       7713MiB |
|    1     24228    C   python3                                       7713MiB |
|    2     24228    C   python3                                       7713MiB |
|    3     24228    C   python3                                       7831MiB |
+-----------------------------------------------------------------------------+

htop shows that about 100% of one CPU core is being used. The reason I say it is not using the GPU is the GPU utilization: it shows 8% here, but it is usually 0%.

It looks like it found the 4 GPUs just fine; I don't see anything abnormal in that output. You don't need to specify tf.device("/gpu:0"). Are all CPUs in use during training? Can you paste the output of nvidia-smi? Do you see the python process in the nvidia-smi output, or does GPU utilization appear to be 0%? –

@DavidParks I have added the nvidia-smi output, and the python process is there. –

Answer

So you are running on the GPU, and from that point of view everything is configured correctly, but the speed looks really bad. Make sure you run nvidia-smi several times to get a feel for how it behaves; it may show 100% one moment and 8% the next.

Getting only about 80% utilization out of the GPU is normal, because some time is wasted loading each batch from main memory onto the GPU before every run (there are new features coming in TF, such as GPU queues, to improve this).

If you are getting much less than ~80% out of the GPU, it suggests you are doing something wrong. Two possible and common causes come to mind:

1) You are doing a bunch of preprocessing between steps, so the GPU runs fast but then you are blocked on a single CPU thread doing some non-TensorFlow work. Move that work into its own thread and feed the data from Python into a queue on the TensorFlow side (see the sketch after this list).

2) Large chunks of data are moving back and forth between CPU and GPU memory. If you are doing that, the bandwidth between the CPU and the GPU can become a bottleneck.
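
As a rough sketch of point 1 (the queue capacity, batch size, and fake preprocessing below are assumptions, not taken from your code), the idea is to run the expensive Python work in its own thread and push the results into a tf.FIFOQueue that the training graph reads from:

    import threading
    import numpy as np
    import tensorflow as tf

    # A queue on the TF side; the feeder thread below fills it while the GPU trains.
    batch_ph = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
    queue = tf.FIFOQueue(capacity=20, dtypes=[tf.float32], shapes=[[28, 28, 1]])
    enqueue_op = queue.enqueue_many(batch_ph)
    next_batch = queue.dequeue_many(100)  # assumed batch size of 100

    def feeder(sess):
        while True:
            # Stand-in for the real (expensive) preprocessing.
            data = np.random.rand(100, 28, 28, 1).astype(np.float32)
            sess.run(enqueue_op, feed_dict={batch_ph: data})

    sess = tf.Session()
    threading.Thread(target=feeder, args=(sess,), daemon=True).start()
    # The training op would then consume next_batch instead of a feed_dict.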

Try adding some timers around the start and end of your training/inference batches to see whether you are spending a lot of time outside of TensorFlow operations.
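
Something along these lines is enough to see the split; the dummy graph and the random batch are just placeholders for your real training op and input pipeline:

    import time
    import numpy as np
    import tensorflow as tf

    # Dummy stand-in graph; replace with the real training op.
    x = tf.placeholder(tf.float32, [None, 784])
    w = tf.Variable(tf.zeros([784, 10]))
    train_op = tf.matmul(x, w)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(10):
            t0 = time.time()
            batch = np.random.rand(100, 784).astype(np.float32)  # stands in for preprocessing / I/O
            t1 = time.time()
            sess.run(train_op, feed_dict={x: batch})
            t2 = time.time()
            print("step %d: outside TF %.3fs, inside TF %.3fs" % (step, t1 - t0, t2 - t1))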

Thanks, your suggestion worked. Utilization is now consistently around 90%. I need one more piece of advice: currently only one of the GPUs is being used and the rest stay at 0%. How do I fix that? –

There is a discussion of using multiple GPUs here, with a link at the bottom to an example implementation: https://www.tensorflow.org/tutorials/using_gpu –

@saharudra You can follow this link: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py – Nandeesh
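
The core idea in that linked example is the tower pattern: split the batch across GPUs, compute gradients on each, and average them before applying. A hedged sketch of the pattern follows; build_loss, the optimizer, and the single conv layer are hypothetical stand-ins for the actual autoencoder, not code from the linked file:

    import tensorflow as tf

    def build_loss(images):
        # Hypothetical one-layer "reconstruction" standing in for the real autoencoder.
        kernel = tf.get_variable("kernel", [3, 3, 1, 1],
                                 initializer=tf.truncated_normal_initializer(stddev=0.1))
        recon = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding="SAME")
        return tf.reduce_mean(tf.square(recon - images))

    num_gpus = 4
    images = tf.placeholder(tf.float32, [None, 28, 28, 1])
    splits = tf.split(images, num_gpus, axis=0)  # batch size must be divisible by num_gpus
    opt = tf.train.AdamOptimizer(1e-3)

    tower_grads = []
    for i in range(num_gpus):
        # Each tower runs on its own GPU but shares variables via the scope.
        with tf.device("/gpu:%d" % i), tf.variable_scope("model", reuse=(i > 0)):
            loss = build_loss(splits[i])
            tower_grads.append(opt.compute_gradients(loss))

    # Average the gradients across towers and apply them once.
    avg_grads = []
    for grads_and_vars in zip(*tower_grads):
        grads = [g for g, _ in grads_and_vars if g is not None]
        avg_grads.append((tf.add_n(grads) / len(grads), grads_and_vars[0][1]))
    train_op = opt.apply_gradients(avg_grads)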