A – Benchmark Programs

A.1 Benchmark 1: Measuring MPI Latency Between Two Nodes
This benchmark always involves just two node programs. You can run it with the
command:
$ mpirun -np 2 -ppn 1 -m mpihosts osu_latency
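The -m option names an mpihosts file listing the nodes to run on, one hostname per line. A minimal example (the node names here are hypothetical; substitute your cluster's own) might look like:

node-01
node-02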
The -ppn 1 option is needed to be certain that the two communicating processes
are on different nodes. Otherwise, in the case of multiprocessor nodes, mpirun
might assign both processes to the same node, and the result would reflect the
shared-memory transport mechanism rather than the latency of the InfiniPath
fabric. The output of the program looks like this:
# OSU MPI Latency Test (Version 2.0)
# Size Latency (us)
0 1.26
1 1.26
2 1.26
4 1.26
8 1.26
16 1.45
32 1.47
64 1.52
128 1.63
256 1.88
512 2.34
1024 3.25
2048 5.13
4096 7.34
8192 11.58
16384 20.25
32768 37.56
65536 78.69
131072 149.84
262144 287.49
524288 565.84
1048576 1119.18
2097152 2220.18
4194304 4424.59
The first column gives the message size in bytes; the second gives the average
(one-way) latency in microseconds. Again, this example is shown to illustrate the
syntax of the command and the format of the output; it is not meant to represent
actual values that might be obtained on any particular InfiniPath installation.
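To make the ping-pong pattern behind a latency test of this kind concrete, here is a minimal sketch in C using MPI. It is illustrative only, not the osu_latency source: rank 0 sends a fixed-size message to rank 1, which echoes it back, and half the averaged round-trip time approximates the one-way latency. The message size and iteration count are assumptions chosen for the example.

/* Minimal MPI ping-pong latency sketch -- illustrative only, not the
 * osu_latency source. Rank 0 sends a SIZE-byte message to rank 1,
 * which echoes it back; half the averaged round-trip time
 * approximates the one-way latency. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define SIZE  1024   /* message size in bytes (assumption for the example) */
#define ITERS 1000   /* round trips to average over (assumption) */

int main(int argc, char **argv)
{
    char buf[SIZE];
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, SIZE);

    MPI_Barrier(MPI_COMM_WORLD);   /* start both ranks together */
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d bytes: %.2f us one-way\n", SIZE,
               (t1 - t0) * 1.0e6 / (2.0 * ITERS));

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with the mpirun command shown above, a sketch like this reports a single message size rather than the full sweep that osu_latency prints.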
A.2 Benchmark 2: Measuring MPI Bandwidth Between Two Nodes
The osu_bw benchmark is meant to measure the maximum rate at which you can
pump data between two nodes. It also uses a ping-pong mechanism, similar to the
osu_latency code, except that in this case the originator of the messages pumps a
number of them (64 in the installed version) in succession using the non-blocking
MPI_Isend function, while the target consumes them as quickly as it can using the
non-blocking MPI_Irecv function.
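To illustrate the pipelined pattern just described, here is a minimal sketch in C using MPI. It is not the osu_bw source: the window of 64 in-flight messages mirrors the installed version, but the message size, the zero-byte acknowledgement that closes the timing window, and the single-size report are assumptions made for the example.

/* Minimal pipelined MPI bandwidth sketch -- illustrative only, not the
 * osu_bw source. Rank 0 posts WINDOW non-blocking sends back to back;
 * rank 1 posts matching non-blocking receives and returns a zero-byte
 * acknowledgement (an assumption here) so rank 0 can stop the clock. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SIZE   (1 << 20)  /* message size in bytes (assumption) */
#define WINDOW 64         /* messages in flight, as in the installed version */

int main(int argc, char **argv)
{
    char *buf = malloc(SIZE);
    MPI_Request req[WINDOW];
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, SIZE);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    if (rank == 0) {
        for (int i = 0; i < WINDOW; i++)
            MPI_Isend(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req[i]);
        MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
        MPI_Recv(NULL, 0, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);          /* wait for the ack */
    } else if (rank == 1) {
        for (int i = 0; i < WINDOW; i++)
            MPI_Irecv(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req[i]);
        MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
        MPI_Send(NULL, 0, MPI_CHAR, 0, 1, MPI_COMM_WORLD);  /* ack */
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%.2f MB/s\n", (double)SIZE * WINDOW / (t1 - t0) / 1.0e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Like the installed benchmark, the sketch reuses a single buffer for every message in the window; a production test would repeat the window many times and sweep across message sizes.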