
Almost 12… drum rolls
Hi there,
I am new to ZMQ and this is my first time working with independent processes.
I'm not sure I can do what I want with ZMQ, which is why I would like your opinion.
Here is my case:
- First process (1): runs continuously and holds traffic data that changes over time.
- Second process (2): a solver that ensures there is no conflict in my traffic.
Currently, I use files to let my two processes communicate. (1) refreshes a traffic file every 10 seconds, and (2) reads it, if it exists, when it starts a new generation of its algorithm (I skip the details).
(2) updates a solution file very regularly, and (1) reads it every 10 seconds.
These reads/writes are asynchronous, and my current version is probably not 100% stable, because you constantly have to check that nobody is writing while someone reads, and vice versa… you see the problem.
I wanted to know whether a pair of PUSH-PULL sockets could work here, in your opinion?
One small detail bothers me: my solution file can change before (1) reads it (because a better solution has been found), so I would need to be able to drop elements from the queue if they have not yet been read…
Thank you in advance for your comments/advice/opinions.
Sier
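A pair of PUSH-PULL sockets could carry the data, but for the "keep only the newest solution" part, a PUB-SUB pair with the ZMQ_CONFLATE option may be a better fit: a conflating socket keeps only the last message received, so stale solutions are dropped automatically (note that conflation does not support multipart messages). A minimal pyzmq sketch of the idea, with placeholder endpoints:

import time
import zmq

ctx = zmq.Context()

# (2) the solver publishes each new solution as it finds one
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5557")     # placeholder endpoint

# (1) the traffic process only ever sees the newest solution
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.CONFLATE, 1)      # keep only the last message
sub.setsockopt_string(zmq.SUBSCRIBE, "")
sub.connect("tcp://127.0.0.1:5557")

time.sleep(0.1)                      # let the subscription propagate
for n in range(3):
    pub.send_string(f"solution {n}") # older solutions get overwritten

time.sleep(0.1)
print(sub.recv_string())             # prints "solution 2"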
11 :)
Hello all,
I am new to ZeroMQ and tried to compile/run the server/client example in C.
While I was able to compile it, I had issues executing the server: it crashed on the assert (rc == 0) (see code below). I wanted to ask what the purpose of the assertion is.
I was able to get the example to work when I commented out the assert.
On Ubuntu 20.04 LTS, using czmq.
// Hello World server
#include <czmq.h>

int main (void)
{
    // Socket to talk to clients
    zsock_t *responder = zsock_new (ZMQ_REP);
    int rc = zsock_bind (responder, "tcp://*:5555");
    // zsock_bind returns the bound port number (5555 here) on success
    // and -1 on failure, so asserting rc == 0 always fails for tcp
    assert (rc != -1);

    while (1) {
        char *str = zstr_recv (responder);
        if (!str)
            break;          // interrupted
        printf ("Received Hello\n");
        sleep (1);          // Do some 'work'
        zstr_send (responder, "World");
        zstr_free (&str);
    }
    zsock_destroy (&responder);
    return 0;
}
10!
Yep, almost 9 years.
This is an amazing article, very well written, covering a lot of aspects. Many thanks for sharing.
Nine years!
For those interested, I figured out my issue. I forgot to add the identity and empty frames at the beginning of the reply message.
The identity frame is necessary for the DealerSocket to figure out which caller's ReceiveReady event to invoke, so without those frames it will not invoke ANY ReceiveReady events, as if the message was never received. I am a little surprised that the socket didn't raise an exception indicating that it received a reply message that did not match up with a previously sent message (via the identity value).
I'm pretty sure I originally had the identity frame and empty frame at the beginning of the reply message, but I was switching back and forth between the Request/Response and Dealer/Router patterns, and along the way I removed them, thinking I was simplifying the reply message, forgetting that they were necessary.
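For anyone hitting the same thing: a ROUTER socket prepends the peer's identity frame to every message it receives, and that identity (plus the empty delimiter frame, if the client sends one) has to lead the reply, or the DEALER side will never match the reply up and deliver it. A minimal pyzmq sketch of the server side, assuming a placeholder endpoint and a client that sends [empty, payload]:

import zmq

ctx = zmq.Context()
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://*:5555")          # placeholder endpoint

while True:
    # arrives as [identity, empty, payload...] from the dealer
    identity, empty, *payload = router.recv_multipart()
    # echo the payload back; the identity and empty frames must come
    # first, or the client's ReceiveReady event will never fire
    router.send_multipart([identity, empty] + payload)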
I am trying to use ZMQ with C# and Python and have run into an issue that I am hoping someone can help me with: my DealerSocket's ReceiveReady event is not being invoked!
I am trying to implement an asynchronous client where a C#/NetMQ client sends a message to the Python/pyzmq 'server' and later gets a reply via a callback method.
On the C# side, I have a DealerSocket, and I subscribe the method MyReceiveReady to the socket's ReceiveReady event. I also create a poller, add the client to it, and call poller.RunAsync() to start the poller. The client assembles a NetMQMessage and sends it via the client.SendMultipartMessage(msg) method.
On the Python side, I have a RouterSocket that receives a message with the server.recv_multipart() method, performs some work with the contents of the message, then assembles a return message and replies to the client with the server.send_multipart(msg) method.
When I run the Python code and then the C# code, the C# client sends the message, the Python server receives it and sends the reply, but the client's ReceiveReady event is never invoked, so the client never receives the reply! If I implement the server in C#, the client's event IS invoked, but I have a requirement that the server be in Python and the client in C#.
Am I doing something wrong? Or is this some deficiency or bug between NetMQ and pyzmq? Please help!
Here is what I've learned in dealing with this issue:
Finally, see the various answers that address this issue of "infinite wait":
stackoverflow[dot]com/questions/7538988/zeromq-how-to-prevent-infinite-wait
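The short version: recv() blocks forever by default, so every blocking receive should get a timeout, either by polling first or by setting ZMQ_RCVTIMEO. A minimal pyzmq sketch, assuming a server at a placeholder endpoint:

import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")  # placeholder endpoint
req.send(b"ping")

# poll with a timeout instead of blocking forever on recv()
if req.poll(timeout=2000):           # milliseconds
    print(req.recv())
else:
    print("no reply within 2 s; close the socket and retry")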
Agreed.
I have to watch Life of Brian again because of this comment =D
I'm using the Majordomo pattern for my current setup, to spread small requests out amongst a large number of workers. The nature of the services requires that everything be synchronous. With this in mind, I don't think I'm getting peak performance out of ZMQ. Using the clients in the ZMQ Guide, I am unable to get the relative performance that the Guide suggests: instead of round-trip messages flowing at 9000+ per second, I only seem to get 1000-2000 per second. So I'm unsure whether this is related to the .NET components I'm using or whether this is the expected round-trip overhead.
My architecture is also spread over multiple PCs and VMs, so I'm sure that plays some role, but I'm not seeing the sub-millisecond response times I was expecting. Round-trip message times seem to be in the 20-25 ms range using the examples provided. Adding my own logic to these examples obviously adds more latency, but I'm wondering whether the numbers I'm seeing are expected or not.
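One sanity check worth doing: a strictly synchronous client can never exceed 1/RTT requests per second, so at 20-25 ms per round trip a single client tops out at 40-50 requests per second, and aggregate throughput scales with the number of concurrent clients. To separate network latency from broker and application overhead, it can help to time bare round trips first; a pyzmq sketch, assuming a plain echo service at a placeholder endpoint:

import time
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")  # placeholder echo endpoint

n = 1000
t0 = time.monotonic()
for _ in range(n):
    req.send(b"ping")
    req.recv()                       # assumes the peer echoes a reply
elapsed = time.monotonic() - t0
print(f"{1000 * elapsed / n:.2f} ms per round trip, {n / elapsed:.0f}/s")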
Running perf/*_lat on a multi-core machine, latencies are worse.
Below are the figures:
16-CPU machine:
./remote_lat tcp://127.0.0.1:5555 30 1000
message size: 30 [B]
roundtrip count: 1000
average latency: 97.219 [us]

Single-CPU machine:
./remote_lat tcp://127.0.0.1:5555 30 1000
message size: 30 [B]
roundtrip count: 1000
average latency: 27.195 [us]
How does the number of CPUs make a difference here? This is reproducible on Ubuntu, Red Hat, and CentOS.
Using libzmq 4.2.5.
Hello
We have a hard-to-find problem with a Python script in which a CNN for license plate recognition runs using Intel OpenVINO, and communication with other programs is done using ZeroMQ. We are not sure what causes the problem, but since it generates so much network traffic that other PCs cannot communicate anymore, a problem with ZeroMQ seems possible.
Problem description:
We have a Python script which runs 3 different CNNs, using the Inference Engine from OpenVINO 2018 R5, on images from Ethernet cameras retrieved with OpenCV VideoCapture. In addition, ZeroMQ is used to pass results to other programs (via TCP). The hardware is either an Intel NUC7BNH, a NUC7DNH, or a NUC8BEH (on the NUC8 no freeze has been observed so far). The OS is Ubuntu 16.04, with either the patched kernel 4.7.0.intel.r5.0 or kernel 4.15.0-15-generic (freezes happen less frequently with kernel 4.15). The script runs multiple times in separate Docker containers, alongside programs in other Docker containers.
What happens is that Linux freezes randomly after some time: sometimes after a few minutes, sometimes after a few hours, while two machines have now been running for many days without a problem. When it freezes, no ACPI shutdown works, the screen freezes, and even the Magic SysRq keys have no effect. A strange side effect is that a lot of network traffic is generated (so much that the network dies and no PC on the switch can communicate). The logs (kern.log, syslog) show nothing special.
If anyone has observed a similar problem or has an idea what could cause this behavior, please let me know.
Greetings,
Thomas
Busy for 8 years!
Are there any links or examples of using ZMQ with VB or VB.net?
Maybe hit a button & start subscribing & showing a stream of published messages
Hello everyone,
I am trying out 0MQ for Python <-> C++ IPC. Interestingly, whenever I send a request shorter than 5 characters from the Python client to the C++ server, I get this result:
H+t
instead of just
H
But if I send at least 5 characters, the message is parsed correctly… For "Hello" and longer I get what I send.
I don't define a buffer length anywhere, so I am wondering what is going on.
The C++ server code looks like this:
zmq::message_t request;

// Wait for the next request from the client
socket_.recv (&request);

// request.data() is NOT null-terminated: the message carries an explicit
// length instead, so the string must be built with request.size(), or
// parsing reads past the end of the buffer (hence the "H+t" garbage)
std::string message (static_cast<char*> (request.data()), request.size());
std::cout << "Received " << message << std::endl;

// Do some 'work'
sleep (1);

// Send reply back to client
...

And the Python client:
print("Sending request to C++")
self.socket.send_string("H")
# Get the reply.
message = self.socket.recv()
print("Received reply [ {} ]".format(message.decode('utf-8')))
If I send a short message from C++ to Python, it is parsed correctly though… Interesting.
I am sending long messages, so it actually shouldn't be that terrible at the moment, but it may become problematic at some point.
Thank you for any insight!
Hi,
I enjoyed reading your article and 0mq sounds great.
Yet there are cases where you need these worker threads to maintain a common control state that rarely changes.
How would you suggest approaching this issue?
Regards,
Tal.
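One common approach (a sketch, not the only way): let a single thread own the control state and broadcast changes to the workers over an inproc PUB-SUB pair, so the workers never share memory and simply apply the latest update they receive. A minimal pyzmq sketch with made-up state fields:

import threading
import time
import zmq

ctx = zmq.Context.instance()

pub = ctx.socket(zmq.PUB)
pub.bind("inproc://control")         # bind before the workers connect

def worker(n):
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt_string(zmq.SUBSCRIBE, "")
    sub.connect("inproc://control")
    while True:
        state = sub.recv_json()      # block until the state changes
        print(f"worker {n} got control state: {state}")

for n in range(3):
    threading.Thread(target=worker, args=(n,), daemon=True).start()

time.sleep(0.1)                      # give subscribers time to join
pub.send_json({"mode": "normal", "rate_limit": 100})  # made-up fields
time.sleep(0.5)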