Cisco SFS InfiniBand Host Drivers User Guide for Linux
OL-12309-01
Chapter 7 MVAPICH MPI
When the test completes successfully, you see output that is similar to the following:
# OSU MPI Bandwidth Test (Version 2.2)
# Size Bandwidth (MB/s)
1 3.352541
2 6.701571
4 10.738255
8 20.703599
16 39.875389
32 75.128393
64 165.294592
128 307.507508
256 475.587808
512 672.716075
1024 829.044908
2048 932.896797
4096 1021.088303
8192 1089.791931
16384 1223.756784
32768 1305.416744
65536 1344.005127
131072 1360.208200
262144 1373.802207
524288 1372.083206
1048576 1375.068929
2097152 1377.907100
4194304 1379.956345
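If you capture output like the table above in a file, you can pick out the peak bandwidth with a short awk one-liner. This is a sketch, and the filename bandwidth.out is a hypothetical name for wherever you saved the test output:

```shell
# The OSU bandwidth output is "<size> <MB/s>" pairs after "#" header lines.
# Skip the header lines and report the message size with the highest bandwidth.
awk '!/^#/ && $2 > max { max = $2; size = $1 } \
    END { printf "peak %.2f MB/s at %d bytes\n", max, size }' bandwidth.out
```

For the sample output shown above, this reports the 4194304-byte message size as the peak.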
MPI Latency Test Performance
This section describes the MPI Latency test performance. The MPI Latency test is another good way to verify that MPI and your installation are functioning properly. This procedure requires that you can log in to remote nodes without entering a login name and password, and it requires that the MPI directory is in your PATH. To test MPI latency, perform the following steps:
Step 1 Log in to your local host.
Step 2 Create a text file containing the names of two hosts on which to run the test. These hostnames are likely to be unique to your cluster. The first name should be the name of the host where you are currently logged in.
The following example shows one way to create a hostfile named hostfile that contains the hostnames
host1 and host2:
host1$ cat > /tmp/hostfile <<EOF
> host1
> host2
> EOF
host1$
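Before running the test, you can confirm the passwordless-login prerequisite for each host in the file. The following is a sketch, assuming ssh is your remote shell; the BatchMode option makes ssh fail immediately rather than prompt for a password:

```shell
# Attempt a non-interactive login to each host in the hostfile and run a
# trivial command; a host that would prompt for a password reports a failure.
while read -r host; do
    ssh -o BatchMode=yes "$host" true \
        && echo "$host: OK" \
        || echo "$host: passwordless login failed"
done < /tmp/hostfile
```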
Step 3 Run the MPI Latency test across multiple hosts. Use the mpirun command to launch MPI jobs. The
command uses these command-line parameters:
• The -np keyword to specify the number of processes
• The number of processes (an integer; use 2 for this test)
• The -hostfile keyword to specify a file containing the hosts on which to run