Fatal error in MPI_Send: invalid tag

Question:

I am fairly new to writing and running parallel code. I am currently working through basic tutorials on parallel programming to get a feel for the process. My computer runs Ubuntu with MPICH.

I tried to run the code titled "A complete parallel program to sum a vector" from this page: http://condor.cc.ku.edu/~grobe/docs/intro-MPI.shtml

After being prompted for and entering the number of values to sum, I get the following error at runtime:

Fatal error in MPI_Send: Invalid tag, error stack: 
MPI_Send(174): MPI_Send(buf=0x7ffeab0f2d3c, count=1, MPI_INT, dest=1, tag=1157242880, MPI_COMM_WORLD) failed 
MPI_Send(101): Invalid tag, value is 1157242880 

I also get this warning while compiling:

sumvecp.f90:41:23: 

    call mpi_send(vector(start_row),num_rows_to_send, mpi_real, an_id, send_data_tag, mpi_comm_world,ierr) 
         1 
Warning: Legacy Extension: REAL array index at (1) 

Here is my code:

program sumvecp 

include '/usr/include/mpi/mpif.h' 

parameter (max_rows = 10000000) 
parameter (send_data_tag = 2001, return_data_tag = 2002) 

integer my_id, root_proces, ierr, status(mpi_status_size) 
integer num_procs, an_id, num_rows_to_receive 
integer avg_rows_per_process, num_rows,num_rows_to_send 

real vector(max_rows), vector2(max_rows), partial_sum, sum 


root_process = 0 

call mpi_init(ierr) 

call mpi_comm_rank(mpi_comm_world,my_id,ierr) 
call mpi_comm_size(mpi_comm_world,num_procs,ierr) 

if (my_id .eq. root_process) then 
    print *, "please enter the number of numbers to sum: " 
    read *, num_rows 
    if (num_rows .gt. max_rows) stop "Too many numbers." 

    avg_rows_per_process = num_rows/num_procs 

    do ii = 1,num_rows 
     vector(ii) = float(ii) 
    end do 

    do an_id = 1, num_procs -1 
     start_row = (an_id*avg_rows_per_process) +1 
     end_row = start_row + avg_rows_per_process -1 
     if (an_id .eq. (num_procs - 1)) end_row = num_rows 
     num_rows_to_send = end_row - start_row + 1 

     call mpi_send(num_rows_to_send, 1, mpi_int, an_id, send_data_tag, mpi_comm_world,ierr) 

     call mpi_send(vector(start_row),num_rows_to_send, mpi_real, an_id, send_data_tag, mpi_comm_world,ierr) 
    end do 

    summ = 0.0 
    do ii = 1, avg_rows_per_process 
     summ = summ + vector(ii) 
    end do 

    print *,"sum", summ, "calculated by the root process." 

    do an_id =1, num_procs -1 
     call mpi_recv(partial_sum, 1, mpi_real, mpi_any_source, mpi_any_tag, mpi_comm_world, status, ierr) 

     sender = status(mpi_source) 
     print *, "partial sum", partial_sum, "returned from process", sender 
     summ = summ + partial_sum 
    end do 

    print *, "The grand total is: ", sum 

else 
    call mpi_recv(num_rows_to_receive, 1, mpi_int, root_process, mpi_any_tag, mpi_comm_world,status,ierr) 

    call mpi_recv(vector2,num_rows_to_received, mpi_real,root_process,mpi_any_tag,mpi_comm_world,status,ierr) 

    num_rows_received = num_rows_to_receive 

    partial_sum = 0.0 
    do ii=1,num_rows_received 
     partial_sum = partial_sum + vector2(ii) 
    end do 

    call mpi_send(partial_sum,1,mpi_real,root_process,return_data_tag,mpi_comm_world,ierr) 
endif 

call mpi_finalize(ierr) 
stop 
end 

You are missing `IMPLICIT NONE`, and you have a large number of undeclared variables.

The reported error occurs because in

send_data_tag = 2001, return_data_tag = 2002 

the variables are implicitly typed as `real`, not `integer`. But you probably have more problems than just that.
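The garbage tag in the error message is exactly what you get when the four bytes of the 32-bit real `2001.0` are reinterpreted as a 32-bit integer. A quick illustration in Python (using the standard `struct` module; this sketch is mine, not part of the original answer):

```python
import struct

# Pack 2001.0 as a 32-bit IEEE-754 float, then reinterpret the
# same four bytes as a 32-bit signed integer -- which is what
# MPI_Send received as the "tag" argument.
tag_bytes = struct.pack('<f', 2001.0)
bogus_tag = struct.unpack('<i', tag_bytes)[0]

print(bogus_tag)  # 1157242880, the exact value in the error message
```

Since valid MPI tags are small non-negative integers (at most `MPI_TAG_UB`), MPICH rejects 1157242880 as an invalid tag.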

First, add `IMPLICIT NONE` and declare all your variables. I also strongly recommend `use mpi` instead of `include '/usr/include/mpi/mpif.h'`; it may help you find more of the problems.


Now I see the code was copied from some website. I would not trust that site, because the code there is plainly wrong.

+1

We have had a number of questions this week with the same underlying problem. `IMPLICIT NONE` is **essential**! –

+0

After adding `IMPLICIT NONE` and declaring the undeclared variables, it works fine. Coming to Fortran from Python (in case that wasn't obvious). Thank you! –