This FAQ is for Open MPI v4.x and earlier.
If you are looking for documentation for Open MPI v5.x and later, please visit docs.open-mpi.org.
Table of contents:
- What operating systems does Open MPI support?
- What hardware platforms does Open MPI support?
- What network interconnects does Open MPI support?
- What run-time environments does Open MPI support?
- Does Open MPI support LSF?
- How much MPI does Open MPI support?
- Is Open MPI thread safe?
- Does Open MPI support 32 bit environments?
- Does Open MPI support 64 bit environments?
- Does Open MPI support execution in heterogeneous environments?
- Does Open MPI support parallel debuggers?
1. What operating systems does Open MPI support?
We primarily develop Open MPI on Linux and OS X.
Other operating systems are supported, however. The exact list of supported
operating systems has changed over time (e.g., native Microsoft Windows
support was added in v1.3.3 and removed prior to v1.8, but Windows is still
supported through Cygwin). See the README file in your copy of Open MPI for
a listing of the OSes that that version supports.
Open MPI is fairly POSIX-neutral, so it will run without too many
modifications on most POSIX-like systems. Hence, if we haven't listed
your favorite operating system here, it should not be difficult to get
Open MPI to compile and run properly. The biggest obstacle is
typically the assembly language, but that's fairly modular and we're
happy to provide information about how to port it to new platforms.
It should be noted that we are quite open to accepting patches for
operating systems that we do not currently support. If we do not have
systems to test these on, we probably will only claim to
"unofficially" support those systems.
2. What hardware platforms does Open MPI support?
Essentially all the common platforms that the operating
systems listed in the previous question support.
For example, Linux runs on a wide variety of platforms, and we
certainly can't claim to support all of them. Open MPI includes
Linux-compiler-based assembly for support of Intel, AMD, and PowerPC
chips, for example.
3. What network interconnects does Open MPI support?
Open MPI is based upon a component architecture; support for its MPI
point-to-point functionality only utilizes a small number of components
at run-time. The architecture was specifically designed to make it easy to
add native support for new network interconnects.
The list of supported interconnects has changed over time. You should
consult your copy of Open MPI to see exactly which interconnects it
supports. The table below shows various interconnects and the
versions in which they were supported in Open MPI (in alphabetical
order):
| Interconnect / Library stack name | Support type | Introduced in Open MPI series | Removed after Open MPI series |
|-----------------------------------|--------------|-------------------------------|-------------------------------|
| Elan                              | BTL          | 1.3                           | 1.6                           |
| InfiniBand MXM                    | MTL          | 1.5                           | 3.1                           |
| InfiniBand MXM                    | PML          | 1.5                           |                               |
| InfiniBand / RoCE / iWARP Verbs   | BTL          | 1.0                           |                               |
| InfiniBand / RoCE / iWARP Verbs   | PML          | 3.0                           |                               |
| InfiniBand mVAPI                  | BTL          | 1.0                           | 1.2                           |
| Libfabric                         | MTL          | 1.10                          |                               |
| Loopback (send-to-self)           | BTL          | 1.0                           |                               |
| Myrinet GM                        | BTL          | 1.0                           | 1.4                           |
| Myrinet MX                        | BTL          | 1.0                           | 1.6                           |
| Myrinet MX                        | MTL          | 1.2                           | 1.8                           |
| Portals                           | BTL          | 1.0                           | 1.6                           |
| Portals                           | MTL          | 1.2                           | 1.6                           |
| Portals4                          | MTL          | 1.7                           |                               |
| PSM                               | MTL          | 1.2                           |                               |
| PSM2                              | MTL          | 1.10                          |                               |
| SCIF                              | BTL          | 1.8                           | 3.1                           |
| SCTP                              | BTL          | 1.5                           | 1.6                           |
| Shared memory                     | BTL          | 1.0                           |                               |
| TCP sockets                       | BTL          | 1.0                           |                               |
| uDAPL                             | BTL          | 1.2                           | 1.6                           |
| uGNI                              | BTL          | 1.7                           |                               |
| usNIC                             | BTL          | 1.8                           |                               |
Is there a network that you'd like to see supported that is not shown
above? Contributions are
welcome!
4. What run-time environments does Open MPI support?
Open MPI is layered on top of the Open Run-Time Environment (ORTE),
which originally started as a small portion of the Open MPI code base.
However, ORTE has effectively spun off into its own sub-project.
ORTE is a modular system that was specifically architected to abstract
away the back-end run-time environment (RTE) system, providing a
neutral API to the upper-level Open MPI layer. Components can be
written for ORTE that allow it to natively utilize a wide variety of
back-end RTEs.
ORTE currently natively supports the following run-time environments:
- Recent versions of BProc (e.g., Clustermatic, pre-1.3 only)
- Sun Grid Engine
- PBS Pro, Torque, and Open PBS (the TM system)
- LoadLeveler
- LSF
- POE (pre-1.8 only)
- rsh / ssh
- Slurm
- XGrid (pre-1.3 only)
- Yod (Red Storm, pre-1.5 only)
Is there a run-time system that you'd like to use Open MPI with that
is not listed above? Component
contributions are welcome!
5. Does Open MPI support LSF?
Starting with Open MPI v1.3, yes!
Prior to Open MPI v1.3, Platform (which is now IBM) released a script-based integration
in the LSF 6.1 and 6.2 maintenance packs around November of 2006. If
you want this integration, please contact your normal IBM support
channels.
6. How much MPI does Open MPI support?
Open MPI 1.2 supports all of MPI-2.0.
Open MPI 1.3 supports all of MPI-2.1.
Open MPI 1.8 supports all of MPI-3.
Starting with v2.0, Open MPI supports all of MPI-3.1.
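If you want to verify what your installed copy reports, a short C program can query both the level of the MPI standard and the Open MPI release string at run time. This is a minimal sketch; MPI_Get_library_version is an MPI-3 routine, so it assumes an Open MPI release of v1.8 or later:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int version, subversion, len;
    char lib[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Init(&argc, &argv);

    /* Level of the MPI standard supported, e.g., 3.1 */
    MPI_Get_version(&version, &subversion);

    /* Implementation-specific string identifying the Open MPI release */
    MPI_Get_library_version(lib, &len);

    printf("MPI standard: %d.%d\nLibrary: %s\n", version, subversion, lib);

    MPI_Finalize();
    return 0;
}
```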
7. Is Open MPI thread safe?
Support for MPI_THREAD_MULTIPLE (i.e., multiple threads
executing within the MPI library) and asynchronous message passing
progress (i.e., continuing message passing operations even while no
user threads are in the MPI library) has been designed into Open MPI
from its first planning meetings.
Support for MPI_THREAD_MULTIPLE was included in the first version of
Open MPI, but it only became robust around v3.0.0. Subsequent
releases continually improve reliability and performance of
multi-threaded MPI applications.
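An application that wants MPI_THREAD_MULTIPLE must request it explicitly and check what the library actually provides. The following is a minimal sketch of that pattern (build and launch it however you normally build MPI programs, e.g., with mpicc and mpirun):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request full thread support; "provided" reports what the library
       is actually willing to give, which may be a lower level. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        printf("MPI_THREAD_MULTIPLE not available; provided level = %d\n",
               provided);
    } else {
        printf("MPI_THREAD_MULTIPLE is available\n");
    }

    MPI_Finalize();
    return 0;
}
```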
8. Does Open MPI support 32 bit environments?
As far as we know, yes. 64 bit architectures have effectively
taken over the world, though, so 32-bit is not tested nearly as much
as 64-bit.
Specifically, most of the Open MPI developers only have 64-bit
machines, and therefore only test 32-bit in emulation mode.
9. Does Open MPI support 64 bit environments?
Yes, Open MPI is 64 bit clean. You should be able to use Open
MPI on 64 bit architectures and operating systems with no
difficulty.
10. Does Open MPI support execution in heterogeneous environments?
As of v1.1, Open MPI requires that the size of C, C++, and
Fortran datatypes be the same on all platforms within a single
parallel application, with the exception of types represented by
MPI_BOOL and MPI_LOGICAL — size differences in these types
between processes are properly handled. Endian differences between
processes in a single MPI job are properly and automatically handled.
Prior to v1.1, Open MPI did not include any support for data size or
endian heterogeneity.
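No special application code is needed to benefit from this: because messages are described with MPI datatypes rather than raw bytes, the library can convert byte order on the receiver's behalf. The sketch below sends an integer from rank 0 to rank 1 (launch with at least two processes); running it across machines of different endianness assumes an Open MPI build with heterogeneous support enabled (the --enable-heterogeneous configure option):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The MPI_INT datatype tells the library what is being sent, so it
       can perform any needed byte-order conversion between processes
       running on machines with different endianness. */
    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```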
11. Does Open MPI support parallel debuggers?
Yes. Open MPI supports the TotalView API for parallel process
attaching, which several parallel debuggers support (e.g., DDT, fx2).
As of v1.2.4 (released in September 2007), Open MPI also supports the
TotalView API for viewing message queues in running MPI processes.
See this FAQ entry for
details on how to run Open MPI jobs under TotalView, and this FAQ entry for
details on how to run Open MPI jobs under DDT.
NOTE: The integration of Open
MPI message queue support is problematic with 64 bit versions of
TotalView prior to v8.3:
- The message queue views will be truncated.
- Both the communicators and requests list will be incomplete.
- Both the communicators and requests list may be filled with wrong
values (such as an MPI_Send to the destination ANY_SOURCE).
There are two workarounds:
- Use a 32 bit version of TotalView
- Upgrade to TotalView v8.3