
OpenMP: partitioning all threads into different groups

I want to divide all the threads into two different groups, since I have two parallel tasks that run asynchronously. For example, if 8 threads are available in total, I would dedicate 6 threads to task1 and the other 2 to task2.

How can I achieve this with OpenMP?

Answer


This is a job for OpenMP nested parallelism: as of OpenMP 3, you can use OpenMP tasks to start two independent pieces of work and then, within those tasks, open parallel regions with the appropriate number of threads.

As a simple example:

#include <stdio.h>
#include <omp.h>

int main(int argc, char **argv) {

    omp_set_nested(1);   /* make sure nested parallelism is on */
    int nprocs = omp_get_num_procs();
    int nthreads1 = nprocs/3;            /* smaller group, for task 1 */
    int nthreads2 = nprocs - nthreads1;  /* the rest, for task 2 */

    #pragma omp parallel default(none) shared(nthreads1, nthreads2) num_threads(2)
    #pragma omp single
    {
        /* each task spawns its own nested team of workers */
        #pragma omp task
        #pragma omp parallel for num_threads(nthreads1)
        for (int i=0; i<16; i++)
            printf("Task 1: thread %d of the %d children of %d: handling iter %d\n",
                   omp_get_thread_num(), omp_get_team_size(2),
                   omp_get_ancestor_thread_num(1), i);

        #pragma omp task
        #pragma omp parallel for num_threads(nthreads2)
        for (int j=0; j<16; j++)
            printf("Task 2: thread %d of the %d children of %d: handling iter %d\n",
                   omp_get_thread_num(), omp_get_team_size(2),
                   omp_get_ancestor_thread_num(1), j);
    }

    return 0;
}

Running this on an 8-core (16 hardware threads) node:

$ gcc -fopenmp nested.c -o nested -std=c99 
$ ./nested 
Task 2: thread 3 of the 11 children of 0: handling iter 6 
Task 2: thread 3 of the 11 children of 0: handling iter 7 
Task 2: thread 1 of the 11 children of 0: handling iter 2 
Task 2: thread 1 of the 11 children of 0: handling iter 3 
Task 1: thread 2 of the 5 children of 1: handling iter 8 
Task 1: thread 2 of the 5 children of 1: handling iter 9 
Task 1: thread 2 of the 5 children of 1: handling iter 10 
Task 1: thread 2 of the 5 children of 1: handling iter 11 
Task 2: thread 6 of the 11 children of 0: handling iter 12 
Task 2: thread 6 of the 11 children of 0: handling iter 13 
Task 1: thread 0 of the 5 children of 1: handling iter 0 
Task 1: thread 0 of the 5 children of 1: handling iter 1 
Task 1: thread 0 of the 5 children of 1: handling iter 2 
Task 1: thread 0 of the 5 children of 1: handling iter 3 
Task 2: thread 5 of the 11 children of 0: handling iter 10 
Task 2: thread 5 of the 11 children of 0: handling iter 11 
Task 2: thread 0 of the 11 children of 0: handling iter 0 
Task 2: thread 0 of the 11 children of 0: handling iter 1 
Task 2: thread 2 of the 11 children of 0: handling iter 4 
Task 2: thread 2 of the 11 children of 0: handling iter 5 
Task 1: thread 1 of the 5 children of 1: handling iter 4 
Task 2: thread 4 of the 11 children of 0: handling iter 8 
Task 2: thread 4 of the 11 children of 0: handling iter 9 
Task 1: thread 3 of the 5 children of 1: handling iter 12 
Task 1: thread 3 of the 5 children of 1: handling iter 13 
Task 1: thread 3 of the 5 children of 1: handling iter 14 
Task 2: thread 7 of the 11 children of 0: handling iter 14 
Task 2: thread 7 of the 11 children of 0: handling iter 15 
Task 1: thread 1 of the 5 children of 1: handling iter 5 
Task 1: thread 1 of the 5 children of 1: handling iter 6 
Task 1: thread 1 of the 5 children of 1: handling iter 7 
Task 1: thread 3 of the 5 children of 1: handling iter 15 

Update: I've changed the above to also print the thread ancestor; there was some confusion because there were (for instance) two "thread 1"s printed, so here I print the ancestor as well (e.g., "thread 1 of the 5 children of 1" vs. "thread 1 of the 11 children of 0").

From the OpenMP standard, s.3.2.4: "The omp_get_thread_num routine returns the thread number, within the current team, of the calling thread." And from section 2.5: "When a thread encounters a parallel construct, a team of threads is created to execute the parallel region [...] The thread that encountered the parallel construct becomes the master thread of the new team, with a thread number of zero for the duration of the new parallel region."

That is, within each of those (nested) parallel regions a team of threads is created whose thread numbers start at zero; but just because those numbers overlap between the teams doesn't mean they are the same threads. Here I've emphasized that by also printing the ancestor's number, but if the threads were doing CPU-intensive work you would also see with monitoring tools that there really are 16 active threads, not just 11.

The reason they are team-local thread numbers rather than globally unique thread numbers is pretty simple: keeping track of globally unique thread numbers in an environment where nested and dynamic parallelism can happen would be nearly impossible. Say there are three teams of threads, numbered [0..5], [6..10], and [11..15], and the middle team finishes. Do we leave gaps in the thread numbering? Do we interrupt all the threads to change their global numbers? What if a new team is started with 7 threads? Do we start them at 6 and have overlapping thread ids, or do we start at 16 and leave a gap in the numbering?
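A minimal sketch of the same two-group split, written without explicit tasks (assuming the same nthreads1/nthreads2 counts and nested parallelism enabled): an outer team of exactly two threads is created, and each outer thread spawns its own nested team for one of the tasks, so the two groups consist of distinct threads by construction.

#include <stdio.h>
#include <omp.h>

int main(void) {
    omp_set_nested(1);                    /* enable nested parallelism */
    int nprocs = omp_get_num_procs();
    int nthreads1 = nprocs/3;
    int nthreads2 = nprocs - nthreads1;

    /* outer team of two: outer thread 0 runs task 1, outer thread 1 runs task 2 */
    #pragma omp parallel num_threads(2)
    {
        if (omp_get_thread_num() == 0) {
            #pragma omp parallel for num_threads(nthreads1)
            for (int i=0; i<16; i++)
                printf("Task 1: inner thread %d of outer thread %d: iter %d\n",
                       omp_get_thread_num(), omp_get_ancestor_thread_num(1), i);
        } else {
            #pragma omp parallel for num_threads(nthreads2)
            for (int j=0; j<16; j++)
                printf("Task 2: inner thread %d of outer thread %d: iter %d\n",
                       omp_get_thread_num(), omp_get_ancestor_thread_num(1), j);
        }
    }
    return 0;
}

Compiled the same way as above, every "Task 1" line reports outer thread 0 and every "Task 2" line reports outer thread 1, which makes the separation of the two inner teams easy to see.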


Thanks for your brilliant answer! However, a major flaw of this implementation is that, in practice, the smaller thread group is basically a subset of the larger one. In your example, task2 also involves all the threads working on task1. I would like some threads to work only on task1 while all the other threads work on task2 and never touch task1. In my application there cannot be any overlap between the two thread groups; otherwise, performance may drop dramatically due to poor locality. – SciPioneer 2014-08-30 03:44:12


@SciPioneer - No, not at all; the two teams overlap in their within-team thread numbers, but they are different threads. I've added an update to clarify this. – 2014-08-30 04:28:29


Thanks for the clarification! It really helped a lot! – SciPioneer 2014-08-31 02:02:53
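If locality is the concern raised in the comments, a rough side sketch (assuming an OpenMP 4.0 runtime with place support, which the exchange above does not cover): the two groups can also be pinned to disjoint sets of cores with OMP_PLACES and the proc_bind clause, so the nested teams do not compete for the same cores.

/* before running:
 *   export OMP_PLACES=cores
 *   export OMP_NESTED=true
 */
#include <stdio.h>
#include <omp.h>

int main(void) {
    /* spread the two outer threads over the place list; each one inherits a
       disjoint partition of places, and its nested team is bound within it */
    #pragma omp parallel num_threads(2) proc_bind(spread)
    {
        int group = omp_get_thread_num();          /* 0 -> task 1, 1 -> task 2 */
        #pragma omp parallel num_threads(group == 0 ? 6 : 2) proc_bind(close)
        printf("group %d: thread %d of %d\n",
               group, omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}

The 6/2 split mirrors the counts in the question; with proc_bind(close) the inner threads stay near their parent's place partition, so task1's and task2's workers should end up on different cores.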