What about load balancing? E.g., there are 20 instances each of apps A, B, C, and D. One advantage of a broker is that it can farm out messages to the instance that is least busy. This is often accomplished with a simple round-robin approach, where a recipient gets a new message for each message it replies to, or something roughly similar.
If there's a single source of tasks, it can load-balance them directly, without a broker in the middle. If there are several sources of tasks (clients) and several workers, a component in the middle is the best solution. We call those components 'devices' in the 0MQ world.
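A minimal sketch of the round-robin distribution such a middle device performs. This is plain Python, not the 0MQ API; the task and worker names are made up for illustration, and a real 0MQ streamer device would do this fair-queuing over sockets rather than in-process lists:

```python
from itertools import cycle

def dispatch(tasks, workers):
    """Assign each incoming task to the next worker in turn (round-robin)."""
    assignments = {w: [] for w in workers}
    ring = cycle(workers)  # endlessly repeats w1, w2, ..., w1, w2, ...
    for task in tasks:
        assignments[next(ring)].append(task)
    return assignments

# Four tasks fan out evenly over two workers:
print(dispatch(["t1", "t2", "t3", "t4"], ["w1", "w2"]))
# {'w1': ['t1', 't3'], 'w2': ['t2', 't4']}
```

The point is only that the device, not any client, owns the distribution decision, so clients stay oblivious to how many workers exist.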
Hi,
First of all, a nice article. I hadn't read about 0MQ before, but I must say I'm tempted to give it a try.
I always liked the idea of a "no broker" architecture, as this is what we developed at my engineering college, but in industry I have only ever worked with centralized, broker-based messaging middleware.
I have a few questions w.r.t. 0MQ:
a. Can it be set up to find the other nodes via a ZooKeeper server?
b. How does 0MQ support the fault tolerance of the nodes?
c. Is there any support within 0MQ for handling rejected and/or bad messages? I know the app/node can do this itself, but then I am sure distributed brokers can also be developed to achieve this.
d. Do message delivery guarantees / message ordering rely solely on the underlying protocol / distributed-broker data structures chosen, or does 0MQ offer something in its API to achieve this?
regards
Navjot Singh
Hi Navjot,
I'm researching messaging architectures and middleware for a project myself, so I'm by no means an experienced 0MQ developer, but ZooKeeper + 0MQ seems a very plausible combination: a fully developed configuration-management/discovery service supplementing a 0MQ messaging layer. I'm also considering JGroups instead of ZooKeeper for that purpose, as a monolithic JGroups network for service discovery and administration is plausible for smaller deployments.
Note that the above leaves the 0MQ and ZooKeeper/JGroups networks completely independent of each other. There is no need to shim 0MQ in as a transport-layer protocol for ZooKeeper or JGroups, just run both in parallel.
The attraction of the approach described above is that you can largely avoid having to develop your own 0MQ broker device and control/administration network topology.
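To make the parallel-networks idea concrete, here is a hypothetical sketch in which the discovery layer (the role ZooKeeper or JGroups would play) is reduced to a plain lookup table; the service name and endpoint below are invented for illustration, and nothing here is ZooKeeper's or 0MQ's actual API:

```python
# Stand-in for a discovery service: maps service names to 0MQ endpoints.
# In a real deployment this would be a ZooKeeper znode tree or a JGroups
# shared state, kept entirely separate from the 0MQ data plane.
registry = {"pricing": "tcp://10.0.0.5:5555"}  # hypothetical endpoint

def lookup(service):
    """Resolve a service name to the endpoint a client should connect to."""
    endpoint = registry.get(service)
    if endpoint is None:
        raise KeyError(f"no endpoint registered for {service!r}")
    return endpoint

# A client resolves once, then talks 0MQ directly to the endpoint:
print(lookup("pricing"))  # tcp://10.0.0.5:5555
```

Discovery answers "where is the service?" exactly once per (re)configuration; all messaging after that flows peer-to-peer over 0MQ, which is why the two networks can stay independent.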
With respect to your other questions:
b) It does not do this directly, other than to make it easy to create a fault-tolerant topology. The argument for the 0MQ approach basically boils down to the fact that there is no universal definition of fault tolerance: it is application-specific. Therefore it is better to make it easy to develop the specific type of tolerance required than to attempt a universal but ultimately flawed and brittle tolerance system that is difficult to understand and test.
c) No, for largely the same reasons as b.
d) Message ordering is guaranteed by 0MQ, but delivery is not guaranteed.
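Since delivery is not guaranteed, an application that cares can layer its own loss detection on top of 0MQ's ordering. A minimal sketch, assuming the sender tags every message with a monotonically increasing sequence number (the function and its name are mine, not part of 0MQ):

```python
def find_gaps(received_seqs):
    """Given sequence numbers received in order, return the missing ones.
    Assumes a non-empty, ascending list, which in-order arrival gives us."""
    missing = []
    expected = received_seqs[0]
    for seq in received_seqs:
        while expected < seq:       # every skipped number is a lost message
            missing.append(expected)
            expected += 1
        expected = seq + 1
    return missing

print(find_gaps([1, 2, 3, 5, 6, 9]))  # [4, 7, 8]
```

Because 0MQ preserves ordering, a gap in the sequence can only mean loss, never reordering, which is what makes this simple check sufficient; the application then decides whether to request a resend, resync state, or ignore the loss.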
Regards,
Josh
I think the arrows are inverted on 5 and 6.
You are right :(
I'll fix it once I have some time free.
pull request => s33.postimg.org fcstyp8v3/broker2.png
Forget it guys, I don't think he's around…
What about management of the firewalls in a distributed architecture? A central broker takes away some of the complexity of specifying the openings one would need.
Also, the lack of guaranteed delivery is quite a big downside, at least from my standpoint (ESB/broker architecture). My customers wouldn't want to lose that, which is a shame given the other positives of 0MQ…
Very good article indeed! Pragmatic, lean, and a clear vocabulary. Congratulations!
Does ZeroMQ
1) guarantee the order of the messages, at least from one sender to one receiver?
2) Is it thread-safe? That is, if my application is multi-threaded, do I have to explicitly synchronize access to the library?
3) Does it support synchronous events?
4) Does it support persistence? At least, is there a way to configure this as a property?
Thanks in advance….
Hi, I'm new to the idea of distributed systems, so please pardon me if the question appears silly. In the first diagram I don't understand the existence of arrows #9 to #12. From the function pipeline, it would appear that processing finishes when App D is done. As I see it, after #8 we have the output, which should take at most two more network hops.
I suppose this has to do something with SOA architecture, but I'll be happy if someone could explain.