How to Run Performance Tests
This document explains how to use the performance tests packaged with ØMQ.
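The packaged test executables (local_lat, remote_lat, local_thr, remote_thr) are built from the perf/ subdirectory of the ØMQ source tree. A typical two-box session might look like the following; the hostname "box1" and the port numbers are placeholders for your own setup:

```shell
# Latency test: start local_lat first, then remote_lat on the other box.
# Arguments: <address> <message-size-in-bytes> <roundtrip-count>.
./local_lat tcp://*:5555 1 100000           # on box1 (binds, bounces messages back)
./remote_lat tcp://box1:5555 1 100000       # on box2 (connects, prints average latency)

# Throughput test: local_thr receives, remote_thr sends.
# Arguments: <address> <message-size-in-bytes> <message-count>.
./local_thr tcp://*:5556 1 1000000          # on box1 (binds, prints throughput)
./remote_thr tcp://box1:5556 1 1000000      # on box2 (connects, sends messages)
```

Start the local_* side before the remote_* side, and keep the message size and count identical on both peers.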
100Gb Ethernet Tests
This test presents performance results of ØMQ/4.3.2 on 100Gb Ethernet.
10Gb Ethernet Tests
This test presents performance results of ØMQ/4.3.2 on 10Gb Ethernet.
InfiniBand Tests (version 2.0.6)
This test presents performance results of ØMQ/2.0.6 on top of the SDP/InfiniBand stack.
Effect of copying on latency
The goal of this test was to show how copying the message data (at the application level) affects latency. One test was the standard ØMQ latency test (local_lat and remote_lat); the other was a modified version of the test that copied the message data once on each peer.
10GbE Tests
This test presents performance results of ØMQ/0.3.1 on 10Gb Ethernet.
Tests on Linux Real-Time Kernel
We've tested ØMQ on top of a standard Linux kernel (SUSE Linux Enterprise Server 10 SP2) and a real-time Linux kernel (SUSE Linux Enterprise Real Time 10 SP2). Our goal was to find out how the real-time Linux kernel improves latency jitter and, specifically, how well it eliminates the latency peaks encountered occasionally with the standard Linux kernel.
InfiniBand Tests (version 0.3.1)
This test presents performance results of ØMQ/0.3.1 on top of the SDP/InfiniBand stack.
ØMQ (version 0.3) Tests
The goal of the tests is to give users an overall impression of the performance characteristics of ØMQ/0.3 in terms of latency, throughput, scalability, etc. They can also be thought of as a check that the new version of the software hasn't lost the performance levels offered by preceding versions.
Compiler Optimisation Tests
We performed several tests to measure latency and throughput for ØMQ test executables compiled using different compilers and different optimisation levels. The goal was to find out whether optimisation at the compiler level yields any measurable performance improvement for ØMQ applications.
ØMQ (version 0.2) Tests
We performed the ØMQ version 0.2 tests in Intel's Low Latency Lab. The goal of the testing was to establish the performance figures, to check whether there is a performance decrease compared to version 0.1, and to find the bottlenecks of the system.
10Gb Ethernet Tests
To understand how ØMQ (version 0.1) scales to 10 gigabit Ethernet, we ran a couple of tests in Intel's Low Latency Lab in London. We used 1 gigabit Ethernet as a baseline and compared the results with those acquired on 10 gigabit Ethernet.
Scaling on Multi-core Systems
In preparation for version 0.2 of ØMQ, with its intended support for seamless scaling on multi-core boxes, we ran a couple of scaling tests with ØMQ version 0.1 in Intel's Low Latency Lab in London.
ØMQ (version 0.1) Tests
Below are the results of latency and throughput tests for the ØMQ lightweight messaging kernel. Keep in mind that the results can be heavily influenced by the whole stack, including network devices, NICs, processor, operating system, etc. The only way to get results relevant to your environment is to run the tests yourself.
ØMQ Nanosecond-precision Test
The original ØMQ tests were done without measuring latency/density on a per-message basis, because the gettimeofday function is quite slow and measuring the time for each message would seriously distort the results. In this paper we focus on measuring performance using the RDTSC instruction available on many x86- and x86-64-based systems today.
Inter-thread Message Transfer Tests
For the performance results of passing messages between threads, see the Y-suite whitepaper.
Results of Erlang-Erlang and Erlang-C++ Benchmarks over a Gbit Network
Results of a standard benchmark between the Erlang bindings, as well as Erlang to C++. Included is a short tutorial on running this kind of test over the network.
(external link) Benchmarking ZeroMQ Erlang bindings over Gbit network.