SLIDE 8
Parallel Code Structure
- The job of the master processor is to initialize the grid with initial and boundary conditions, then decompose it and send each processor the information it needs.
- Each slave processor receives its initial grid from the master node and begins to perform calculations. After each iteration, the individual processors must pass the first and last columns of their respective grids to the neighboring processors, which use them to update their boundary values.
- The slave processors iterate until an acceptable convergence has been reached and then send the new temperature values back to the master processor to reassemble the grid.
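The decompose / iterate-and-exchange / reassemble flow above can be modeled in plain Python. This is a single-process sketch, not MPI: the per-worker sub-grids and halo-column copies stand in for the messages a real code would pass with MPI_SEND/MPI_RECV, and all names are illustrative.

```python
def jacobi_serial(grid, iters):
    """Reference sweep on one processor: each interior point becomes the
    average of its four neighbours; boundary rows and columns stay fixed."""
    nrows, ncols = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, nrows - 1):
            for j in range(1, ncols - 1):
                new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j]
                                    + grid[i][j-1] + grid[i][j+1])
        grid = new
    return grid

def jacobi_decomposed(grid, nworkers, iters):
    """Master decomposes the grid column-wise, each 'worker' iterates with a
    halo exchange of its first/last columns, then the master reassembles."""
    nrows, ncols = len(grid), len(grid[0])
    interior = ncols - 2                       # columns that get updated
    # Master step: assign each worker a contiguous block of interior columns.
    bounds, start = [], 1
    for w in range(nworkers):
        width = interior // nworkers + (1 if w < interior % nworkers else 0)
        bounds.append((start, start + width))  # owned columns [lo, hi)
        start += width
    # Each worker holds its owned columns plus one halo column on each side.
    blocks = [[[grid[i][j] for j in range(lo - 1, hi + 1)]
               for i in range(nrows)] for lo, hi in bounds]
    for _ in range(iters):
        # Halo exchange: stands in for the MPI_SEND/MPI_RECV of each
        # worker's first and last owned column to its neighbours.
        for w in range(nworkers):
            if w > 0:                          # left halo <- left neighbour
                for i in range(nrows):
                    blocks[w][i][0] = blocks[w - 1][i][-2]
            if w < nworkers - 1:               # right halo <- right neighbour
                for i in range(nrows):
                    blocks[w][i][-1] = blocks[w + 1][i][1]
        # Local Jacobi update on each worker's owned columns.
        for w, (lo, hi) in enumerate(bounds):
            blk = blocks[w]
            new = [row[:] for row in blk]
            for i in range(1, nrows - 1):
                for j in range(1, hi - lo + 1):
                    new[i][j] = 0.25 * (blk[i-1][j] + blk[i+1][j]
                                        + blk[i][j-1] + blk[i][j+1])
            blocks[w] = new
    # Master step: reassemble the full grid from the owned columns.
    result = [row[:] for row in grid]
    for w, (lo, hi) in enumerate(bounds):
        for i in range(1, nrows - 1):
            for j in range(lo, hi):
                result[i][j] = blocks[w][i][j - lo + 1]
    return result

# A 6x8 grid with a hot top edge: because the halos are refreshed before
# every sweep, the decomposed run reproduces the serial result exactly.
grid = [[0.0] * 8 for _ in range(6)]
grid[0] = [100.0] * 8
assert jacobi_decomposed(grid, 3, 25) == jacobi_serial(grid, 25)
```

Because every worker refreshes its halo columns before each sweep, the decomposed update reads exactly the same neighbour values as the serial sweep, so the two results agree point for point regardless of the worker count.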
MPI Functions
- MPI Functions Called By All Processors
  MPI_INIT(IERR)
  MPI_FINALIZE(IERR)
  MPI_COMM_RANK(MPI_COMM_WORLD, MYID, IERR)
  MPI_COMM_SIZE(MPI_COMM_WORLD, NUMPROCS, IERR)
- MPI Communication Operations
  MPI_SEND(BUFFER, COUNT, DATATYPE, DESTINATION, TAG, MPI_COMM_WORLD, IERR)
  MPI_RECV(BUFFER, COUNT, DATATYPE, SOURCE, TAG, MPI_COMM_WORLD, STATUS, IERR)
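As a rough single-process model of the point-to-point addressing these two calls use, messages can be keyed by source, destination, and tag. This is illustrative Python, not an MPI binding: real MPI_SEND/MPI_RECV also carry COUNT and DATATYPE, and a real MPI_RECV blocks until a matching message arrives.

```python
from collections import deque

class Mailbox:
    """Toy model of MPI point-to-point matching: messages queue up per
    (source, destination, tag) triple, mirroring the DESTINATION, SOURCE,
    and TAG arguments of MPI_SEND/MPI_RECV. Here an empty queue raises
    KeyError/IndexError instead of blocking."""

    def __init__(self):
        self.queues = {}

    def send(self, buffer, source, dest, tag):
        # Analogue of MPI_SEND(BUFFER, ..., DESTINATION, TAG, ...).
        key = (source, dest, tag)
        self.queues.setdefault(key, deque()).append(list(buffer))

    def recv(self, source, dest, tag):
        # Analogue of MPI_RECV(BUFFER, ..., SOURCE, TAG, ...): only a
        # message with a matching source and tag is delivered.
        return self.queues[(source, dest, tag)].popleft()
```

Distinct tags keep unrelated messages between the same pair of ranks from being confused, e.g. a grid column sent with tag 10 is never delivered to a receive posted with tag 20.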