InfiniBand Tests

Introduction

This test presents performance results for ØMQ/0.3.1 running on top of the SDP/InfiniBand stack.

Environment

Box 1:

8-core AMD Opteron 8356, 2.3GHz
Mellanox ConnectX MT25408
Linux/Debian 4.0 (kernel version 2.6.24.7)
SDP (OFED 1.3)
ØMQ version 0.3.1

Box 2:

8-core Intel Xeon E5440, 2.83GHz
Mellanox ConnectX MT25408
Linux/Debian 4.0 (kernel version 2.6.24.7)
SDP (OFED 1.3)
ØMQ version 0.3.1

The boxes were connected by a non-switched InfiniBand network.

Results

All the tests were run for message sizes of 1, 2, 4, 8, 16, 32, …, 32768 and 65536 bytes.

Latency

End-to-end latency is 23.54 us for 1-byte messages and remains fairly stable up to 1 kB messages (31.98 us). For larger messages, latency is noticeably higher:

ib1.png
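End-to-end latency figures like these are conventionally obtained from a ping-pong test: one peer bounces every message straight back, and the one-way latency is the total elapsed time divided by twice the number of round trips. A minimal sketch of that computation (the elapsed time and round-trip count below are illustrative, not taken from this test run):

```python
def one_way_latency_us(elapsed_s, roundtrips):
    """One-way latency in microseconds from a ping-pong run:
    each round trip covers the measured path twice."""
    return elapsed_s * 1e6 / (2 * roundtrips)

# Illustrative: 10,000 round trips completing in 0.4708 s
# correspond to the 23.54 us reported for 1-byte messages.
print(one_way_latency_us(0.4708, 10_000))  # → 23.54
```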

Throughput

Throughput peaks at 4.1 million messages per second for 8-byte messages:

ib2.png
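Note that the peak message rate occurs at a small message size, where the implied bandwidth is still modest. Converting the rate with bandwidth = rate × size × 8 (treating bandwidth as payload bits on the wire, ignoring protocol overhead) shows why bandwidth keeps growing for larger messages:

```python
def bandwidth_gbps(msg_rate, msg_size_bytes):
    """Payload bandwidth in Gb/s implied by a message rate and size."""
    return msg_rate * msg_size_bytes * 8 / 1e9

# The 4.1M msgs/s peak with 8-byte messages moves only ~0.26 Gb/s:
print(round(bandwidth_gbps(4.1e6, 8), 2))  # → 0.26
```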

As for bandwidth usage, we reach the 1 Gb/s boundary with 64-byte messages. The 2 Gb/s boundary is passed with 256-byte messages, and 3 Gb/s is reached with 1 kB messages:

ib3.png
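The same relationship, inverted, gives the message rates behind these boundaries (a back-of-the-envelope check, again assuming bandwidth means payload bytes on the wire):

```python
def msg_rate(bandwidth_gbps, msg_size_bytes):
    """Messages per second needed to sustain a given payload bandwidth."""
    return bandwidth_gbps * 1e9 / (8 * msg_size_bytes)

# Rates implied by the boundaries above:
print(round(msg_rate(1, 64)))    # ~1.95 million msgs/s at 64 bytes
print(round(msg_rate(3, 1024)))  # ~366,000 msgs/s at 1 kB
```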

Conclusion

Latency for small messages is as low as 23.5 us, which we consider quite decent. With a single stream of messages up to 64 kB in size, we were able to use ~4.5 Gb/s of the 8 Gb/s of bandwidth available. We might have done better with larger messages; however, that was out of the scope of this test.