
I want to run some tests that use OpenMPI to process the data in an array, splitting the work (the second part, with matrices) across nodes. The problem I'm running into is that the data array gets re-initialized every time, and I don't know how to prevent that: how do I initialize an array just once with OpenMPI?

How can I create a variable-length array in ANSI C that is initialized only once when using OpenMPI? I tried making it static and global, but that didn't work.

#include <stdio.h> 
#include <stdlib.h> 
#include <time.h> 
#include <mpi.h> 

#define NUM_THREADS 4 
#define NUM_DATA 1000 

void search(int *data, int start, int end); /* defined elsewhere */ 

static int *list = NULL; 

int main(int argc, char *argv[]) { 
    int numprocs, rank, namelen; 
    char processor_name[MPI_MAX_PROCESSOR_NAME]; 
    int n = NUM_DATA * NUM_DATA; 
    int i; 

    printf("hi\n"); 

    /* Each MPI rank is a separate process with its own copy of list, 
       so this guard is true in every rank and the block runs every time. */ 
    if (list == NULL) 
    { 
        printf("ho\n"); 
        list = malloc(n * sizeof(int)); 

        for (i = 0; i < n; i++) 
        { 
            list[i] = rand() % 1000; 
        } 
    } 

    int position; 

    MPI_Init(&argc, &argv); 
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs); 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 
    MPI_Get_processor_name(processor_name, &namelen); 
    printf("Process %d on %s out of %d\n", rank, processor_name, numprocs); 

    clock_t start = clock(); 

    position = n / NUM_THREADS * rank; 
    search(list, position, n / NUM_THREADS * (rank + 1)); 

    printf("Time elapsed: %f seconds\n", ((double)clock() - (double)start) / (double)CLOCKS_PER_SEC); 

    free(list); 

    MPI_Finalize(); 
    return 0; 
} 

Instead of giving us such a lengthy piece of code, could you better describe what your goals are, how you tried to achieve them, and what problems you ran into? – 2010-09-26 12:43:38

Answer


Each MPI rank is a separate process with its own address space, so a static or global variable is not shared between ranks; every rank re-runs the initialization. Probably the simplest approach is to have the rank-0 process do the initialization while the other processes block, and once it is finished, have them all start working.

A basic example that tries to call your search function (note: this is dry-coded, i.e. untested):

#include <stdio.h> 
#include <stdlib.h> 
#include <mpi.h> 

#define NUM_THREADS 4 
#define NUM_DATA 1000 

void search(int *data, int start, int end); /* defined elsewhere */ 

int main(int argc, char *argv[]) { 
    int *list; 
    int numprocs, rank, namelen, i, n; 
    int chunksize, offset; 
    char processor_name[MPI_MAX_PROCESSOR_NAME]; 

    n = NUM_DATA * NUM_DATA; 

    MPI_Status stat; 
    MPI_Init(&argc, &argv); 
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs); 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 
    MPI_Get_processor_name(processor_name, &namelen); 

    // Note: you'll need to handle n % NUM_THREADS != 0, but I'm ignoring that for now 
    chunksize = n / NUM_THREADS; 

    if (rank == 0) { 
        // Think of this as a master process 
        // Do your initialization in this process 
        list = malloc(n * sizeof(int)); 

        for (i = 0; i < n; i++) 
        { 
            list[i] = rand() % 1000; 
        } 

        // Once you're ready, send each slave process a chunk to work on 
        offset = chunksize; 
        for (i = 1; i < numprocs; i++) { 
            MPI_Send(&list[offset], chunksize, MPI_INT, i, 0, MPI_COMM_WORLD); 
            offset += chunksize; 
        } 

        search(list, 0, chunksize); 

        // If you need some sort of response back from the slaves, do a recv loop here 
    } else { 
        // If you're not the master, you're a slave process, so wait to receive data 
        list = malloc(chunksize * sizeof(int)); 
        MPI_Recv(list, chunksize, MPI_INT, 0, 0, MPI_COMM_WORLD, &stat); 

        // Now you can do work on your portion 
        search(list, 0, chunksize); 

        // If you need to send something back to the master, do it here. 
    } 

    free(list); 

    MPI_Finalize(); 
    return 0; 
} 
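As a side note, MPI has a collective operation that does this chunk distribution in a single call. Below is a minimal sketch (untested) of the same idea using MPI_Scatter in place of the manual send/receive loop; it assumes n is evenly divisible by the number of processes and that search has the same signature as above.

#include <stdlib.h> 
#include <mpi.h> 

#define NUM_DATA 1000 

void search(int *data, int start, int end); /* assumed defined elsewhere */ 

int main(int argc, char *argv[]) { 
    int numprocs, rank, i; 
    int n = NUM_DATA * NUM_DATA; 
    int *list = NULL;  /* full array, allocated on rank 0 only */ 
    int *chunk; 
    int chunksize; 

    MPI_Init(&argc, &argv); 
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs); 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 

    chunksize = n / numprocs;  /* assumes n % numprocs == 0 */ 

    if (rank == 0) { 
        /* initialization happens exactly once, on the root rank */ 
        list = malloc(n * sizeof(int)); 
        for (i = 0; i < n; i++) 
            list[i] = rand() % 1000; 
    } 

    /* every rank, including the root, receives its contiguous slice */ 
    chunk = malloc(chunksize * sizeof(int)); 
    MPI_Scatter(list, chunksize, MPI_INT, 
                chunk, chunksize, MPI_INT, 
                0, MPI_COMM_WORLD); 

    search(chunk, 0, chunksize); 

    free(chunk); 
    if (rank == 0) 
        free(list); 
    MPI_Finalize(); 
    return 0; 
} 

MPI_Scatter hands each rank (the root included) its own slice, which removes the separate master/slave branches when the per-rank work is identical; the manual MPI_Send/MPI_Recv version above is still the more flexible choice if the master needs to do something different from the workers.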

Thank you Dusty, that is exactly what I was looking for. I hadn't even considered making rank 0 the master and the rest workers; instead I was blindly looking for something else. – amischiefr 2010-09-27 01:43:53


No problem. The rank-0 master is a very common pattern across MPI, so I have to be careful not to go the other way and sometimes treat it as my only tool. =D – Dusty 2010-09-27 03:42:32