
MPI error with matrices: expected expression before ',' token

I get a strange error when using MPI_Send. It happens when I try to send part of a two-dimensional array (matrix): "MPI_matrixMultiplication.c:68:99: error: expected expression before ',' token". The specific line is the one where I try to send a section of the matrix: MPI_Send(&a[beginPosition][0], ...); (as you can see, I have commented out the other matrix-related sends and receives).

///////////////////////////////////////////////////////// 
// multiplication of 2 matrices, parallelized using MPI // 
///////////////////////////////////////////////////////// 
#include <stdio.h> 
#include <mpi.h> 

// must use #define here, and not simply int blahblahblah, because C doesn't accept plain ints for array dimensions :(
#define matrixARowSize 3  // size of the row for matrix A 
#define matrixAColumnSize 3 // size of the column for matrix A 
#define matrixBRowSize 3  // size of the row for matrix B 
#define matrixBColumnSize 3 // size of the column for matrix B 

// tags used for sending/receiving data: 
#define LOWER_BOUND 1 // first line to be processed 
#define UPPER_BOUND 2 // last line to be processed 
#define DATA  // data to be processed 

int a[matrixARowSize][matrixAColumnSize];  // matrix a 
int b[matrixBRowSize][matrixBColumnSize];  // matrix b 
int c[matrixARowSize][matrixBColumnSize];  // matrix c 
int main() 
{ 
    int currentProcess; // current process 
    int worldSize;  // world size 
    int i, j, k;  // iterators 
    int rowsComputedPerProcess;  // how many rows of the first matrix should be computed in each process 
    int numberOfSlaveProcesses;  // the number of slave processes 
    int processesUsed;  //how many processes of the available ones are actually used 

    MPI_Init(NULL, NULL);  // MPI_Init() 
    MPI_Comm_size(MPI_COMM_WORLD, &worldSize);  // get the world size 
    MPI_Comm_rank(MPI_COMM_WORLD, &currentProcess);  // get current process 

    numberOfSlaveProcesses = worldSize - 1;  // 0 is the master, rest are slaves 
    rowsComputedPerProcess = worldSize > matrixARowSize ? 1 : (matrixARowSize/numberOfSlaveProcesses); 
    processesUsed = worldSize > matrixARowSize ? matrixARowSize : numberOfSlaveProcesses; 

    /* 
    * in the first process (the father); 
    * initialize the 2 matrices, then start splitting the data to the slave processes 
    */ 
    if (!currentProcess)  // in father process 
    { 
     printf("rows per process: %d\n", rowsComputedPerProcess); 
     printf("nr of processes used: %d\n", processesUsed); 
     // init matrix A 
     for(i = 0; i < matrixARowSize; ++i) 
      for(j = 0; j < matrixAColumnSize; ++j){ 
       a[i][j] = i + j + 1; 
       // printf("%d\n", a[i][j]); 
       // printf("%d\n", *(a[i] + j)); 
      } 

     // init matrix B 
     for(i = 0; i < matrixBRowSize; ++i) 
      for(j = 0; j < matrixBColumnSize; ++j) 
       b[i][j] = i + j + 1; 

     // start sending data to the slaves for them to work >:) 
     int beginPosition; // auxiliary values used for sending the offsets to slaves 
     int endPosition; 
     for(i = 1; i < processesUsed; ++i)  // the last process is dealt with separately 
     { 
      beginPosition = (i - 1)*rowsComputedPerProcess; 
      endPosition = i*rowsComputedPerProcess; 
      MPI_Send(&beginPosition, 1, MPI_INT, i, LOWER_BOUND, MPI_COMM_WORLD); 
      MPI_Send(&endPosition, 1, MPI_INT, i, UPPER_BOUND, MPI_COMM_WORLD); 
      MPI_Send(&a[beginPosition][0], ((endPosition - beginPosition)*matrixARowSize), MPI_INT, i, DATA, MPI_COMM_WORLD); 
      // MPI_Send(a[beginPosition], (endPosition - beginPosition)*matrixARowSize, MPI_INT, i, DATA, MPI_COMM_WORLD); 
      // for(j = beginPosition; j < endPosition; ++j) 
      // for (k = 0; k < matrixAColumnSize; ++k) 
      // { 
      //  printf("%d ", *(a[j] + k)); 

      // } 
      // printf("\n"); 
      // printf("beg: %d, end: %d\n", beginPosition, endPosition); 
      // printf(" data #%d\n", (endPosition - beginPosition)*matrixARowSize); 
     } 

     // deal with last process 
     beginPosition = (i - 1)*rowsComputedPerProcess; 
     endPosition = matrixARowSize; 
     MPI_Send(&beginPosition, 1, MPI_INT, i, LOWER_BOUND, MPI_COMM_WORLD); 
     MPI_Send(&endPosition, 1, MPI_INT, i, UPPER_BOUND, MPI_COMM_WORLD); 
     // MPI_Send(a[beginPosition], (endPosition - beginPosition)*matrixARowSize, MPI_INT, i, DATA, MPI_COMM_WORLD); 
     // printf("beg: %d, end: %d\n", beginPosition, endPosition); 
     // printf(" data #%d\n", (endPosition - beginPosition)*matrixARowSize); 
    } 
    else {  // if this is a slave (rank > 0) 
     int beginPosition; // offsets received from the master 
     int endPosition; 

     MPI_Recv(&beginPosition, 1, MPI_INT, 0, LOWER_BOUND, MPI_COMM_WORLD, MPI_STATUS_IGNORE); 
     MPI_Recv(&endPosition, 1, MPI_INT, 0, UPPER_BOUND, MPI_COMM_WORLD, MPI_STATUS_IGNORE); 
     // MPI_Recv(a[beginPosition], (endPosition - beginPosition)*matrixARowSize, 0, DATA, MPI_COMM_WORLD, MPI_STATUS_IGNORE); 

     for(i = beginPosition; i < endPosition; ++i) { 
      for (j = 0; j < matrixAColumnSize; ++j) 
       printf("(# %d, i=%d, j=%d: %d ", currentProcess, i, j, a[i][j]); 
      // printf("\n"); 
     } 

    } 


    MPI_Finalize(); 
    return 0;  // bye-bye 
} 

Answer


Your DATA constant is empty:

#define DATA  // data to be processed 

So what you are actually trying to do is:

MPI_Send(&a[beginPosition][0], ((endPosition - beginPosition)*matrixARowSize), MPI_INT, i, , MPI_COMM_WORLD); 

which logically produces the "expected expression before ',' token" error.
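As a minimal sketch of the fix (the tag value 3 below is an arbitrary choice, not taken from the original post; any integer distinct from LOWER_BOUND and UPPER_BOUND works):

#define LOWER_BOUND 1 // first line to be processed 
#define UPPER_BOUND 2 // last line to be processed 
#define DATA  3  // data to be processed -- the tag now expands to an actual value 

// with DATA defined, the send compiles; note that one row of a holds 
// matrixAColumnSize elements (for this square matrix it equals matrixARowSize) 
MPI_Send(&a[beginPosition][0], (endPosition - beginPosition)*matrixAColumnSize, MPI_INT, i, DATA, MPI_COMM_WORLD); 

With DATA expanding to an integer, the preprocessor no longer leaves an empty argument between the two commas, so the compiler sees a valid expression for the tag.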


Hehehe, I really didn't see that one :). Thanks, zakinster! – TheBestPessimist 2013-04-22 10:19:56


Well, if you look at the column number in the error (not just the line number), you'll see it points right at the DATA constant. – zakinster 2013-04-22 10:23:07


One more question: how do you make code appear as part of a sentence, like the DATA in your last answer? – TheBestPessimist 2013-04-22 10:37:17
