Julia MPI Example
MPI (Message Passing Interface) is a standard for communication between processes running across multiple compute nodes, and it speeds up processing by distributing a problem among those processes. It is similar to multithreading, except that each MPI process has its own memory and resources rather than sharing a single address space. The examples below are basic MPI examples showing two ways to communicate between processes: broadcasting and sending/receiving. They can be modified to fit your own problems, and the code can be cloned from our GitLab. An environment and a Slurm job script are also provided that can be used to run the examples.
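Before running the examples, MPI.jl needs to be available in the active Julia environment. The lines below are a minimal setup sketch, not the provided environment itself; they assume MPI.jl can use either its bundled MPI library or a system MPI loaded through modules, so the exact steps may differ on your cluster.

using Pkg
Pkg.add("MPI")            # add MPI.jl to the active project/environment
Pkg.build("MPI")          # build against the available MPI library
using MPI
MPI.install_mpiexecjl()   # optional: install the mpiexecjl launcher wrapper

With the launcher installed, an example could then be started with something like mpiexecjl -n 4 julia broadcast.jl, or with srun inside a Slurm job script; the filename and process count here are only placeholders.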
Broadcast Example
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
N = 5
root = 0

# Only the root rank reports how many processes are running
if MPI.Comm_rank(comm) == root
    print(" Running on $(MPI.Comm_size(comm)) processes\n")
end
MPI.Barrier(comm)

# MPI.Bcast! works in place on a preallocated buffer, so every non-root rank
# must allocate an array of the same size and element type before the call
if MPI.Comm_rank(comm) == root
    A = [i*(1.0 + im*2.0) for i = 1:N]
else
    A = Array{ComplexF64}(undef, N)
end
MPI.Bcast!(A, root, comm)
print("rank = $(MPI.Comm_rank(comm)), A = $A\n")

# MPI.bcast serializes arbitrary Julia objects, so no preallocation is needed;
# the broadcast value is returned rather than written in place
if MPI.Comm_rank(comm) == root
    B = Dict("foo" => "bar")
else
    B = nothing
end
B = MPI.bcast(B, root, comm)
print("rank = $(MPI.Comm_rank(comm)), B = $B\n")

# Even a function (closure) can be broadcast with MPI.bcast
if MPI.Comm_rank(comm) == root
    f = x -> x^2 + 2x - 1
else
    f = nothing
end
f = MPI.bcast(f, root, comm)
print("rank = $(MPI.Comm_rank(comm)), f(3) = $(f(3))\n")
Send and Receive Example
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)

# Each rank sends to its right neighbour and receives from its left
# neighbour, with wraparound, forming a ring
dst = mod(rank+1, size)
src = mod(rank-1, size)

N = 4
send_mesg = Array{Float64}(undef, N)
recv_mesg = Array{Float64}(undef, N)
fill!(send_mesg, Float64(rank))

# Post a non-blocking receive first, then a non-blocking send;
# the tag (src+32 / rank+32) identifies the sending rank plus an offset
rreq = MPI.Irecv!(recv_mesg, src, src+32, comm)
print("$rank: Sending $rank -> $dst = $send_mesg\n")
sreq = MPI.Isend(send_mesg, dst, rank+32, comm)

# Wait for both the receive and the send to complete before using the data
stats = MPI.Waitall!([rreq, sreq])
print("$rank: Received $src -> $rank = $recv_mesg\n")
MPI.Barrier(comm)
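The example above uses the non-blocking MPI.Isend and MPI.Irecv! calls together with MPI.Waitall!. The same ring exchange can also be written with a single blocking call, MPI.Sendrecv!, which pairs the send and the receive so that neighbouring ranks cannot deadlock. The sketch below assumes the same older positional MPI.jl call signatures used in the examples above; newer MPI.jl releases use keyword arguments instead.

using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)
dst = mod(rank+1, size)
src = mod(rank-1, size)

N = 4
send_mesg = fill(Float64(rank), N)
recv_mesg = Array{Float64}(undef, N)

# Blocking combined send/receive: send to dst with tag rank+32 and
# receive from src with tag src+32 in one call
MPI.Sendrecv!(send_mesg, dst, rank+32, recv_mesg, src, src+32, comm)
print("$rank: Received $src -> $rank = $recv_mesg\n")
MPI.Barrier(comm)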