Distributed parallel programming in Haskell using MPI.
MPI is defined by the Message-Passing Interface Standard, as specified by the Message Passing Interface Forum. The latest release of the standard is known as MPI-2. These Haskell bindings are designed to work with any standards-compliant implementation of MPI-2, for example MPICH2 (http://www.mcs.anl.gov/research/projects/mpich2) and OpenMPI (http://www.open-mpi.org).
In addition to this documentation, users may find it helpful to consult the MPI-2 standard documentation provided by the MPI Forum (http://www.mpi-forum.org), as well as the documentation for the MPI implementation linked to this library (that is, the implementation that was chosen when this Haskell library was compiled).
Control.Parallel.MPI.Fast
contains a high-performance interface for working with (possibly mutable) arrays of storable Haskell data types.
Control.Parallel.MPI.Simple
contains a convenient (but slower) interface for sending arbitrary serializable Haskell data values as messages; a short usage sketch follows this list.
Control.Parallel.MPI.Internal
contains a direct binding to the C interface.
Control.Parallel.MPI.Base
contains essential MPI functionality which is independent of the message-passing API. This is re-exported by the Fast and Simple modules, and usually does not need to be explicitly imported itself.
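To give a feel for the Simple interface, here is a minimal sketch showing that any serializable value, not just a String, can be sent directly; it assumes only the send and recv functions used in the closing example below, and that tuples, lists and Doubles all have the required Serialize instances (as they do in the cereal library):

module Main where

import Control.Parallel.MPI.Simple (mpiWorld, commWorld, unitTag, send, recv)

main :: IO ()
main = mpiWorld $ \size rank ->
  if size < 2
    then putStrLn "At least two processes are needed"
    else case rank of
      -- A structured value (here a list paired with a label) is
      -- serialized and deserialized transparently by the Simple API.
      0 -> do (msg, _status) <- recv commWorld 1 unitTag
              print (msg :: ([Double], String))
      1 -> send commWorld 0 unitTag ([3.14, 2.72] :: [Double], "constants")
      _ -> return ()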
Notable differences between Haskell-MPI and the standard C interface to MPI:
* Some collective message passing operations are split into send and receive parts to facilitate a more idiomatic Haskell style of programming. For example, C provides the MPI_Gather function, which is called by all processes participating in the communication, whereas Haskell-MPI provides gatherSend and gatherRecv, which are called by the sending and receiving processes respectively (see the sketch after this list).

* The order of arguments for some functions is changed to allow for the most common patterns of partial function application.
* Errors are raised as exceptions rather than returned as error codes (assuming the error handler is set to errorsThrowExceptions; otherwise errors terminate the computation, just as in the C interface). A sketch of handling such an exception also follows this list.
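As an illustration of the split collectives, the following sketch assumes the Simple module's gatherSend and gatherRecv have the shapes gatherSend :: Serialize msg => Comm -> Rank -> msg -> IO () and gatherRecv :: Serialize msg => Comm -> Rank -> msg -> IO [msg]; check the module documentation for the exact signatures:

module Main where

import Control.Parallel.MPI.Simple (mpiWorld, commWorld, gatherSend, gatherRecv)

main :: IO ()
main = mpiWorld $ \_size rank ->
  if rank == 0
    -- The root calls the receiving half, contributing its own value
    -- and collecting one value from every process.
    then do msgs <- gatherRecv commWorld 0 ("from rank " ++ show rank)
            mapM_ putStrLn msgs
    -- Every other process calls the sending half with its contribution.
    else gatherSend commWorld 0 ("from rank " ++ show rank)

In C this would be a single MPI_Gather call executed identically by every process; splitting the operation lets each side of the communication read naturally in Haskell.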
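Because errors surface as exceptions, they can be handled with the ordinary Control.Exception machinery. The sketch below is deliberately assumption-laden: it presumes the error handler is set to errorsThrowExceptions, that sending to a nonexistent rank makes the underlying MPI call fail, and it catches SomeException rather than naming the library's specific exception type:

module Main where

import Control.Exception (SomeException, try)
import Control.Parallel.MPI.Simple (mpiWorld, commWorld, unitTag, send)

main :: IO ()
main = mpiWorld $ \size _rank -> do
  -- Rank `size` does not exist, so this send should fail; with
  -- errorsThrowExceptions in effect the failure arrives as an
  -- exception instead of aborting the program.
  result <- try (send commWorld (fromIntegral size) unitTag "oops")
  case result of
    Left e   -> putStrLn ("caught MPI error: " ++ show (e :: SomeException))
    Right () -> putStrLn "send unexpectedly succeeded"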
Below is a small but complete MPI program. Process 1 sends the message "Hello World" to process 0, which in turn receives the message and prints it to standard output. All other processes, if there are any, do nothing.
module Main where

import Control.Parallel.MPI.Simple (mpiWorld, commWorld, unitTag, send, recv)

main :: IO ()
main = mpiWorld $ \size rank ->
  if size < 2
    then putStrLn "At least two processes are needed"
    else case rank of
      0 -> do (msg, _status) <- recv commWorld 1 unitTag
              putStrLn msg
      1 -> send commWorld 0 unitTag "Hello World"
      _ -> return ()
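To try the program, compile it against this library with GHC and launch it with your MPI implementation's process launcher, requesting at least two processes, for example mpirun -np 2 ./Hello (some installations use mpiexec instead). With fewer than two processes, each process prints the warning and exits.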