ØMQ Blog

ZeroMQ and Clojure, a brief introduction - 09 Sep 2010 07:45 - by martin_sustrik - Comments: 0

Nice blog on using 0MQ from Clojure by Antonio Garrote:


ZeroMQ: Modern & Fast Networking Stack - 03 Sep 2010 18:07 - by martin_sustrik - Comments: 0

Nice article about ØMQ by Ilya Grigorik:


Worth reading!

September Meetups - 02 Sep 2010 13:30 - by martin_sustrik - Comments: 0

We're organizing two ∅MQ meetups in September…

London — September 7th, 2010

Where: Theodore Bullfrog
Time: 6.30PM
Organizer: Martin Sustrik

View the map

Chicago — September 19th, 2010

Where: to be announced
Time: 6.00PM
Organizer: Pieter Hintjens

Want to organize your own ∅MQ meetup? Discuss it on zeromq-dev!

Multithreading Magic - 01 Sep 2010 16:05 - by pieterh - Comments: 0


In this article Pieter Hintjens and Martin Sustrik examine the difficulties of building concurrent (multithreaded) applications and what this means for enterprise computing. The authors argue that a lack of good tools for software designers means that neither chip vendors nor large businesses will be able to fully benefit from more than 16 cores per CPU, let alone 64 or more. They then examine an ideal solution, and explain how the ØMQ framework for concurrent software design is becoming this ideal solution. Finally they explain ØMQ's origins, and the team behind it.

Going multi-core

Until a few years ago, concurrent programming was synonymous with high-performance computing (HPC) and multithreading was what a word processor did to re-paginate a document while also letting you edit it. Multi-core CPUs were expensive and rare, and limited to higher-end servers. We achieved speed by getting more and more clock cycles out of single cores, which ran hotter and hotter.

Today, multi-core CPUs have become commodity items. While clock speeds are stable at around 2-3GHz, the number of cores per chip is doubling every 18-24 months. Moore's Law still applies. The spread of multi-core CPUs out from the data centre will continue so that netbooks and portable devices come with 2 or 4 cores and top-end server CPUs come with 64 cores. And this growth will continue, indefinitely.

Several factors drive this evolution. First, the need for CPU producers to compete. Whether or not we can use the power, we prefer to buy more capacity. Second, having hit the clock-cycle ceiling, CPU designers have found multi-core to be the next way of scaling their architectures and offering more competitive products. Third, at the low end, the spread of multitasking operating systems like Android means that those additional cores can immediately translate into performance. And lastly, at the high end, the high cost of a data centre slot for a blade computer (estimated by one investment bank at $50,000 per year) pushes users to demand more cores per blade.

It has been clear for some years that the future is massively multi-core, but the software industry is lagging behind. At the "High Performance on Wall Street 2008" event, speaker after speaker said the same thing: our software tools are not keeping up with hardware evolution.

The most widely used languages, C and C++, do not offer any support for concurrency. Programmers roll their own by using threading APIs. Languages that do support concurrency, such as Java, Python, .NET, and Ruby, do it in a brute-force manner. Depending on the implementation - there are over a dozen Ruby interpreters, for example - they may offer "green threading" or true multithreading. Neither approach scales, due to reliance on locks. It is left to exotic languages like Erlang to do it right. We'll see what this means in more detail later.

I could end this article by telling everyone to just switch to Erlang but that's not a realistic answer. Like many clever tools, it works only for very clever developers. But most developers - including those who more and more need to produce multithreaded applications - are just average.

So let's look at what the traditional approach gets wrong, what Erlang gets right, and how we can apply these lessons to all programming languages. Concurrency for average developers, in other words.

The Painful State of the Art

A technical article from Microsoft titled "Solving 11 Likely Problems In Your Multithreaded Code" demonstrates how painful the state of the art is. The article covers .Net programming but these same problems affect all developers building multithreaded applications in conventional languages.

The article says, "correctly engineered concurrent code must live by an extra set of rules when compared to its sequential counterpart." This is putting it mildly. The developer enters a minefield of processor code reordering, data atomicity, and worse. Let's look at what those "extra rules" are. Note the rafts of new terminology the poor developer has to learn.

  • "Forgotten synchronization". When multiple threads access shared data, they step on each other's work. This causes "race conditions": bizarre loops, freezes, and data corruption bugs. These effects are timing and load dependent, so non-deterministic and hard to reproduce and debug. The developer must use locks and semaphores, place code into critical sections, and so on, so that shared data is safely read or written by only one thread at a time.
  • "Incorrect granularity". Having put all the dangerous code into critical sections, the developer can still get it wrong, easily. Those critical sections can be too large and they cause other threads to run too slowly. Or they can be too small, and fail to protect the shared data properly.
  • "Read and write tearing". Reading and writing a 32-bit or 64-bit value may often be atomic. But not always! The developer could put locks or critical sections around everything. But then the application will be slow. So to write efficient code, he must learn the system's memory model, and how the compiler uses it.
  • "Lock-free reordering". Our multithreaded developer is getting more skilled and confident and finds ways to reduce the number of locks in his code. This is called "lock-free" or "low-lock" code. But behind his back, the compiler and the CPU are free to reorder code to make things run faster. That means that instructions do not necessarily happen in a consistent order. Working code may randomly break. The solution is to add "memory barriers" and to learn a lot more about how the processor manages its memory.
  • "Lock convoys". Too many threads may ask for a lock on the same data and the entire application grinds to a halt. Using locks is, we discover as we painfully place thousands of them into our code, inherently unscalable. Just when we need things to work properly, they do not. There's no real solution to this except to try to reduce lock times and re-organize the code to reduce "hot locks" (i.e. real conflicts over shared data).
  • "Two-step dance". In which threads bounce between waking and waiting, not doing any work. This just happens in some cases, due to signalling models, and luckily for our developer, has no workarounds. When the application runs too slowly, he can tell his boss, "I think it's doing an N-step dance," and shrug.
  • "Priority inversion". In which tweaking a thread's priority can cause a lower-priority thread to block a higher-priority one. As the Microsoft article says, "the moral of this story is to simply avoid changing thread priorities wherever possible."
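
To make the first two items concrete, here is a minimal Python sketch (an illustration for this article, not code from the Microsoft piece) of a race condition on shared data and its conventional lock-based fix:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Unsynchronized read-modify-write on shared data: two threads can read
    # the same old value, and one of the two updates is silently lost.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # The conventional fix: a critical section around every shared access.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n_threads=4, n=100_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# The locked version is always exact; the unlocked one may silently lose
# updates depending on timing and load - exactly why such bugs are so hard
# to reproduce and debug.
assert run(safe_increment) == 400_000
```

Note that the unsafe version often produces the right answer anyway; that non-determinism, not the lock itself, is what makes this style of programming so costly.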

This list does not cover the hidden costs of locking: context switches and cache invalidation that ruin performance.

Learning Erlang suddenly seems like a good idea.

Cost of the Traditional Approach

Apart from the difficulty of finding developers who can learn and properly use this arcane knowledge, the state of the art is expensive in other ways.

  • It is significantly more expensive to write and maintain such code correctly. We estimate from our own experience that it costs at least 10 times and perhaps up to 100 times more than single-threaded code. And this at a time when the cost of coding elsewhere is falling, thanks to cheaper and better tools and frameworks.
  • The approach does not scale beyond a few threads. Multithreaded applications mostly use two threads, sometimes three or four. This means we can't expect to fully exploit CPUs with 16 and more cores. So while we can get concurrency (for that word processor), we do not get scalability.
  • Even if the code is multithreaded, it does not really benefit from multiple cores. Individual threads are often blocking each other. And developers do not realize that their fancy multithreaded application is essentially reduced to single threadedness. Tools like Intel's ThreadChecker help diagnose this but there is no practical solution except to start designing the application again.
  • Even in the best cases, where applications are designed to avoid extensive locking, they do not scale above 10 cores. Locking, even when sparse and well-designed, does not scale. The more threads and CPU cores are involved, the less efficient it becomes. The law of diminishing returns hits particularly hard here.
  • To scale further, our developer must switch to 100% lock-free algorithms for data sharing. He is now into the realm of black magic. He exploits hardware instructions like compare-and-swap (CAS) to flip between instances of a data structure without locks. The code is now another order of magnitude more difficult and the skill set even more rare. Lose one developer, and the entire application can die.

All of this adds up to a mountain of cost. Yes, we can put lovely multi-core boxes into our data centres. But when we ask our development teams to build code to use those, and they start to follow industry practice, the results are horrendous.

For CPU producers, this cost creates a hard ceiling, where buyers will decide to stop buying newer CPUs because they cannot unlock the power of those extra cores.

Let's see how an ideal solution would work, and whether we can apply that to the real world of Java, C, C++, and .NET.

Towards an Ideal Solution

Of all the approaches taken to multithreading, only one is known to work properly. That means it scales to any number of cores, avoids all locks, costs little more than conventional single-threaded programming, is easy to learn, and does not crash in strange ways - at least, no more strangely than a normal single-threaded program.

Ulf Wiger summarizes the key to Erlang's concurrency thus:

  • "Fast process creation/destruction
  • Ability to support ≫ 10 000 concurrent processes with largely unchanged characteristics.
  • Fast asynchronous message passing.
  • Copying message-passing semantics (share-nothing concurrency).
  • Process monitoring.
  • Selective message reception."

The key is to pass information as messages rather than shared state. Building an ideal solution is fairly delicate, but it's more a matter of perspective than anything else. We need the ability to deliver messages to queues, each queue feeding one process. And we need to do this without locks. And we need to do this locally, between cores, or remotely, between boxes. And we need to make this work for any programming language. Erlang is great in theory but in practice, we need something that works for Java, C, C++, .NET, even Cobol. And which connects them all together.
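
As a rough sketch of this share-nothing style in plain stdlib Python - with `queue.Queue` standing in for a real message transport, which is an assumption of this illustration, not the ØMQ API:

```python
import queue
import threading

def worker(inbox: queue.Queue, outbox: queue.Queue):
    # All state is private to this thread; the two queues are the only
    # communication channel, so no locks appear in application code.
    while True:
        msg = inbox.get()        # block until a message arrives
        if msg is None:          # conventional shutdown sentinel
            break
        outbox.put(msg.upper())  # do private work, send the result onward

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

# By convention the sender never touches a message after sending it, which
# gives share-nothing semantics even though Python passes references.
for word in ("zero", "mq"):
    inbox.put(word)
inbox.put(None)
t.join()

results = [outbox.get(), outbox.get()]
assert results == ["ZERO", "MQ"]
```

The worker's code is entirely sequential; all the concurrency lives in the queues between threads.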

If done right, we eliminate the problems of the traditional approach and gain some extra advantages:

  • Our code is thread-naive. All data is private. Without exception, all of the "likely problems of multithreading code" disappear: no critical sections, no locks, no semaphores, no race conditions. No lock convoys, no 3am nightmares about optimal granularity, no two-step dances.
  • Although it takes some care to break an application into tasks that each run as one thread, it then becomes trivial to scale the application: just create more instances of a thread. You can run any number of instances, with no synchronization (thus no scaling) issues.
  • The application never blocks. Every thread runs asynchronously and independently. A thread has no timing dependencies on other threads. When you send a message to a working thread, the thread's queue holds the message until the thread is ready for more work.
  • Since the application overall has no lock states, threads run at full native speed, making full use of CPU caches, instruction reordering, compiler optimization, and so on. 100% of CPU effort goes to real work, no matter how many threads are active.
  • This functionality is packaged in a reusable and portable way so that programmers in any language, on any platform, can benefit from it.
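
"Just create more instances of a thread" can be sketched like this (stdlib Python, a hypothetical squaring worker invented for the illustration):

```python
import queue
import threading

def worker(jobs: queue.Queue, results: queue.Queue):
    # Every instance is identical and independent: scaling the application
    # up or down means changing N_WORKERS, and nothing else.
    while True:
        n = jobs.get()
        if n is None:
            break
        results.put((n, n * n))

N_WORKERS = 4
jobs, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(N_WORKERS)]
for t in threads:
    t.start()

for n in range(100):
    jobs.put(n)
for _ in threads:
    jobs.put(None)       # one shutdown sentinel per worker instance
for t in threads:
    t.join()

squares = dict(results.get() for _ in range(100))
assert len(squares) == 100 and squares[12] == 144
```

Nothing in the worker knows, or needs to know, how many siblings it has.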

If we can do this, what do we get? The answer is nothing less than: perfect scaling to any number of cores, across any number of boxes. Further, no extra cost over normal single-threaded code. This seems too good to be true.

The ØMQ (ZeroMQ) Framework

ØMQ started life in 2007 as an iMatix project to build a low-latency version of our OpenAMQ messaging product, with Cisco and Intel as partners. From the start, ØMQ was focussed on getting the best possible performance out of hardware. It was clear from the start that doing multithreading "right" was the key to this.

We wrote then in a technical white paper that:

"Single threaded processing is dramatically faster when compared to multi-threaded processing, because it involves no context switching and synchronisation/locking. To take advantage of multi-core boxes, we should run one single-threaded instance of an AMQP implementation on each processor core. Individual instances are tightly bound to the particular core, thus running with almost no context switches."
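
That one-single-threaded-engine-per-core structure can be sketched in stdlib Python as below. The core pinning via `os.sched_setaffinity` is Linux-specific and skipped on other platforms; this is an illustrative assumption for this article, not code from ØMQ:

```python
import os
import queue
import threading

NCORES = os.cpu_count() or 1

def engine(core: int, inbox: queue.Queue, outbox: queue.Queue):
    # One single-threaded engine per core, fed only by its own queue.
    if hasattr(os, "sched_setaffinity"):    # Linux only; a no-op elsewhere
        try:
            os.sched_setaffinity(0, {core % NCORES})  # pin calling thread
        except OSError:
            pass
    while True:
        msg = inbox.get()
        if msg is None:
            break
        outbox.put(msg * 2)     # each engine runs with no locks at all

inboxes = [queue.Queue() for _ in range(NCORES)]
outbox = queue.Queue()
threads = [threading.Thread(target=engine, args=(i, inboxes[i], outbox))
           for i in range(NCORES)]
for t in threads:
    t.start()

# Route messages round-robin; every engine sees a strictly serial stream.
for i in range(20):
    inboxes[i % NCORES].put(i)
for q in inboxes:
    q.put(None)
for t in threads:
    t.join()

total = sum(outbox.get() for _ in range(20))
assert total == 2 * sum(range(20))
```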

ØMQ is special, and popular, for several reasons:

  • It's fully open source and is supported by a large active community. There are over 50 named contributors to the code, the bulk of them from outside iMatix.
  • It has developed an ultra-simple API based on BSD sockets. This API is familiar, easy to learn, and conceptually identical no matter what the language.
  • It implements real messaging patterns like topic pub-sub, workload distribution, and request-response. This means ØMQ can solve real-life use cases for connecting applications.
  • It seems to work with every conceivable programming language, operating system, and hardware. This means ØMQ connects entire applications as well as the pieces of applications.
  • It provides a single consistent model for all language APIs. This means that investment in learning ØMQ is rapidly portable to other projects.
  • It is licensed as LGPL code. This makes it usable, with no licensing issues, in closed-source as well as free and open source applications. And since the LGPL obliges those who modify the library to publish their changes, those who improve ØMQ automatically become contributors.
  • It is designed as a library that we link with our applications. This means there are no brokers to start and manage, and fewer moving pieces means less to break and go wrong.
  • It is above all simple to learn and use. The learning curve for ØMQ is roughly one hour.

And it has odd uses thanks to its tiny CPU footprint. As Erich Heine writes, "the [ØMQ] perf tests are the only way we have found yet which reliably fills a network pipe without also making cpu usage go to 100%".

Most ØMQ users come for the messaging and stay for the easy multithreading. No matter whether their language has multithreading support or not, they get perfect scaling to any number of cores, or boxes. Even in Cobol.

One goal for ØMQ is to get these "sockets on steroids" integrated into the Linux kernel itself. This would mean that ØMQ disappears as a separate technology. The developer sets a socket option and the socket becomes a message publisher or consumer, and the code becomes multithreaded, with no additional work.

Summary and Conclusion

Inevitably, CPUs big and small are moving toward multi-core architectures. But traditional software engineering has no tools to deal with this. The state of the art is expensive, fragile, and limited to a dozen cores at most. Specialized languages such as Erlang do multithreading "right", but remain out of reach of the world's mainstream programmers. As a consequence, the potential of massively-multi-core systems remains untapped, and CPU producers find a software market that cannot use their latest products.

Hope comes in the shape of iMatix Corporation's ØMQ. From its origins as a messaging system, this small and modest library provides the power of a language like Erlang, but in a form that any developer can use. ØMQ looks like BSD sockets, it's trivial to learn, and yet lets developers create applications that scale perfectly to tens of thousands of processes.

Unlike traditional multithreading, which uses locks to share data, ØMQ passes messages between threads. Internally, it uses black magic "lock-free" algorithms, but these are hidden from the developer. The result is that ØMQ application code looks just like ordinary single-threaded code, except that it processes messages.

The ØMQ project was, from birth, built around an open source community of expert engineers from iMatix, Cisco, Intel, Novell, and other firms. The project has always focussed on benchmarking, transparent tests, and user-driven growth. Today the community develops the bulk of the code outside the ØMQ "core", including language bindings for everything from Ada to OpenCOBOL, including, of course, Java, C++, .Net, Python, etc.

For developers who want a simple messaging bus, ØMQ uses familiar APIs and takes literally less than an hour to install, learn and use.

For architects who need low-latency messaging for market data distribution or high-performance computing, ØMQ is an obvious candidate. It is free to use, easy to learn, and is extremely fast (measured on Infiniband at 13.4 usec end-to-end latencies and over 8M messages a second).

For architects who need scalability to multiple cores, ØMQ provides all that they need, no matter what their target languages and operating systems. As a plus, they can stretch the resulting applications across more boxes when they need to.

About iMatix Corporation

At iMatix we've done message-oriented multithreading since 1991, when we used this approach to build fast and scalable front-ends for OpenVMS. In 1996 we used a similar approach to build a fast lightweight web server (Xitami). The Xitami framework was single core, using "green threading" rather than real OS threads. We used this framework for many server-side applications.

From 2004 we worked with JPMorgan Chase to design AMQP, and its first implementation, OpenAMQ. JPMorgan migrated their largest trading application to OpenAMQ in 2006. Today OpenAMQ runs the Dow Jones Industrial Average and is widely used in less well-known places.

In 2009, iMatix released ØMQ/2.0 with its simpler socket-style API, and in 2010 announced that it would cease support for AMQP in its products, citing irremediable failures in the AMQP development process.

iMatix CEO Pieter Hintjens was the main designer of AMQP/0.8 and the editor of the de-facto standard AMQP/0.9.1. Martin Sustrik, architect and maintainer of ØMQ, worked for three years in the iMatix OpenAMQ team, and was project leader for the AMQP working group.

Not To Be Confused - 01 Sep 2010 11:25 - by pieterh - Comments: 2

The other day, I got this email:

From (***REDACTED***)
to ph@imatix.com
date (***REDACTED***)
subject (***REDACTED***)

Dear iMatrix,

Recently I was sent a link to one of the blogs on your website, an article, which I found quite interesting.

However, to my dismay, I also realized that iMatrix had fallen into the trap of not doing their PR homework when choosing your company nomenclature.

In particular I am referring to the use of the unique Danish vowel 'Ø' as part of the abbreviation for your 0MQ messaging/parallelism library. To wit, the letter Ø is not just an extra element in the ASCII table that doubles for 0 (zero).

The Danish language has no less than three vowels, which are unique to it. They are Æ ('ae'), Ø ('oe') and Å ('aa'). The form of writing them with two letters is just a convenience for the benefit of non-Danes. The actual pronunciation is nothing like those vowels concatenated, that is why Danish has unique letters for those sounds.

Notice that these three letters are grouped in the extended ASCII table along with similar unique letters from other languages. Ie. 'Ö' is a very similar sounding letter, which is found in Swedish and German, among other languages.

Worse still, then these letters are extremely common in Danish, and your abbreviation ØMQ doesn't read anything like 'Zero somethingorother' in this language.

Here are some examples of Danish words using the letter Ø:

Ø : (Yes, the letter alone) Island.
Fanø : Name of a Danish Island. There are many more like this, as Denmark has hundreds of islands of varying sizes.
Sø : Lake.
Sønder : Southern.
Søndersø : Location name, 'The southern lake'.
Strøm : Flow or current, both of bodies of water and electricity.
Grøn : Green.
Sønderstrømfjord : Location name on Grønland (aka. Greenland to non-natives).
Sød : Sweet.
Dør : Door.
Øre : Ear.
Øje : Eye.
Køn : Pretty.
Kød : Meat.
Føre : To lead.
Køre : To drive.
Før : Before.
Købe : To buy.
København : Name of our capital, often translated into English as Copenhagen. 'Havn' means harbour, so København can be roughly translated as 'The trading harbour', which it is.
Grøft: Ditch (long depression in the soil for rainwater.)
Høj : Hill.
Grøfthøjparken : Name of the street, where I live. Means 'The park with the hill and the ditch'.
Rød : Red.
Fløde : Cream.
Grød : Porridge.
Rødgrød med fløde : Name of an old fashioned dessert, the name of which is completely impossible to pronounce correctly by anyone but native Danes. English and most other languages simply doesn't have the 'Ø' sound. This sentence is often used by Danish children to tease non-natives by trying to get them to say it. It is a red porridge (more like jello, actually) made from strawberries and served with chilled cream.

In other word: Ø is not a funny looking zero. It was included in the extended ASCII table because it is vital to writing Danish on a keyboard. I realize that historically the American IBM engineers used a shorthand with the slash over the zero to distinguish it from the letter 'O'. But the ASCII code for their slashed zero was the ASCII zero, not the Danish letter. They basically changed the optical image of zero for the convenience of humans and did not introducing a new character for the purpose.

So you guys are basically looking really strange to us over here. Do note that using ØMQ instead of 0MQ also ruins your ratings in the search engines. Only Danish keyboards will have the letter Ø directly available for typing, and we do not think of Ø when we hear the English word zero. So basically no-one, anywhere on the planet, will know how to correctly search for the name of your product.

Kind regards


The Danes, it seems, take their alphabet very seriøusly. The last thing we want is to annoy anyone, especially someone whose ancestors used to raid northern Scotland for grød and haggis. But the writer of this email, Mr (***REDACTED***), from (***REDACTED***) (if that is even his real name), shocks us with his lack of knowledge, not to mention his 'accidental' misspelling of iMatix. Ø and ∅ are two totally different letters. They don't even look remotely similar!

It's true that all of us have sometimes written ØMQ instead of ∅MQ by accident (the letters are like right next to each other), but I'm going to set the record straight. For all Danes, ∅MQ is pronounced "The-letter-not-to-be-confused-with-Ø-em-queue", or if you are a mathematician, "empty-set-times-M-times-Q". The rest of us can continue to say "zero-em-queue".

For the collectors of software trivia, ∅MQ is possibly the very first product ever to use a Unicode name. Where ∅MQ goes, Gøøgle cannot follow!

If you want to become a ØMQ blogger, contact us.

All posts

07 Feb 2011 10:21: Installing ZeroMQ and Java bindings on Ubuntu 10.10 - Comments: 0
03 Feb 2011 09:11: Python Multiprocessing With ØMQ - Comments: 1
01 Feb 2011 21:29: ØMQ And QuickFIX - Comments: 3
23 Dec 2010 09:33: Message Oriented Network Programming - Comments: 0
19 Dec 2010 13:03: ØMQ/2.1.0 available on OpenVMS - Comments: 0
13 Dec 2010 12:46: Load Balancing In Java - Comments: 1
10 Nov 2010 10:01: Building a GeoIP server with ZeroMQ - Comments: 0
31 Oct 2010 11:44: ØMQ blog by Gerard Toonstra - Comments: 0
13 Oct 2010 10:47: ØMQ - The Game - Comments: 0
09 Sep 2010 07:45: ZeroMQ and Clojure, a brief introduction - Comments: 0
03 Sep 2010 18:07: ZeroMQ: Modern & Fast Networking Stack - Comments: 0
02 Sep 2010 13:30: September Meetups - Comments: 0
01 Sep 2010 16:05: Multithreading Magic - Comments: 0
01 Sep 2010 11:25: Not To Be Confused - Comments: 2
27 Aug 2010 05:39: ØMQ for Clojure - Comments: 0
17 Aug 2010 07:20: ZeroMQ: What You Need to Know Braindump - Comments: 0
03 Aug 2010 10:46: RFC: ØMQ Contributions, Copyrights and Control - Comments: 1
02 Aug 2010 19:54: New - ØMQ Labs! - Comments: 0
26 Jul 2010 14:02: Video Introduction to ØMQ - Comments: 3
13 Jul 2010 20:19: ØMQ available at CPAN - Comments: 1
12 Jul 2010 12:22: Welcome, Objective-C :-) - Comments: 0
10 Jul 2010 13:42: Few Blogposts on ØMQ - Comments: 2
29 Jun 2010 08:19: Ruby Gem for ØMQ/2.0.7 available! - Comments: 0
23 Jun 2010 09:27: ZeroMQ an introduction - Comments: 1
18 Jun 2010 10:15: Mongrel2 Is "Self-Hosting" - Comments: 0
08 Jun 2010 21:25: Internet Worldview in Messaging World - Comments: 0
07 Jun 2010 16:13: Berlin Buzzwords 2010 - Comments: 1
05 Jun 2010 10:24: Loggly Switches to ØMQ - Comments: 4
04 Jun 2010 18:05: ØMQ/2.0.7 (beta) released - Comments: 12
24 May 2010 09:37: Building ØMQ and pyzmq on Red Hat - Comments: 2
18 May 2010 06:30: To Trie or not to Trie - Comments: 0
12 May 2010 13:02: ØMQ/2.0.6 on OpenVMS - Comments: 3
08 May 2010 09:58: Zero-copy and Multi-part Messages - Comments: 17
07 Apr 2010 10:23: The Long and Winding Road Behind - Comments: 2
06 Apr 2010 16:41: Historical Highlights - Comments: 0
