Publishing to a topic
See also: rospy.Publisher Code API
You can create a handle to publish messages to a topic using the rospy.Publisher class. The most common usage for this is to provide the name of the topic and the message class/type of the topic. You can then call publish() on that handle to publish a message, e.g.:
pub = rospy.Publisher('topic_name', std_msgs.msg.String, queue_size=10)
pub.publish(std_msgs.msg.String("foo"))
rospy.Publisher initialization
rospy.Publisher(topic_name, msg_class, queue_size)
The only required arguments to create a rospy.Publisher are the topic name, the Message class, and the queue_size. Note: queue_size is only available in Hydro and newer. e.g.
pub = rospy.Publisher('topic_name', std_msgs.msg.String, queue_size=10)
There are additional advanced options that allow you to configure the Publisher:
subscriber_listener=rospy.SubscribeListener
Receive callbacks via a rospy.SubscribeListener instance when new subscribers connect and disconnect.
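As a sketch of that option (topic and message are illustrative choices), a listener that greets each new subscriber individually could subclass rospy.SubscribeListener and override its peer_subscribe/peer_unsubscribe callbacks:

```python
import rospy
from std_msgs.msg import String

class GreetingListener(rospy.SubscribeListener):
    # called once for each new subscriber; topic_publish sends to all
    # current subscribers, peer_publish sends only to the new peer
    def peer_subscribe(self, topic_name, topic_publish, peer_publish):
        peer_publish(String("hello, new subscriber on %s" % topic_name))

    # called when a subscriber disconnects
    def peer_unsubscribe(self, topic_name, num_peers):
        rospy.loginfo("subscriber left %s, %d remain", topic_name, num_peers)

pub = rospy.Publisher('chatter', String,
                      subscriber_listener=GreetingListener(),
                      queue_size=10)
```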
tcp_nodelay=False
Enable TCP_NODELAY on the publisher's socket, which disables Nagle's algorithm on TCPROS connections. This results in lower-latency publishing at the cost of efficiency.
latch=False
Enable 'latching' on the connection. When a connection is latched, a reference to the last message published is saved and sent to any future subscribers that connect. This is useful for slow-changing or static data like a map. The message is not copied! If you change the message object after publishing, the changed message will be sent to future subscribers.
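For instance (a sketch with an assumed topic name), the "not copied" caveat means that mutating a message after publishing it leaks the mutation to late subscribers:

```python
import rospy
from std_msgs.msg import String

rospy.init_node('map_meta_publisher')

# latched publisher: the last published message is re-sent to every
# subscriber that connects later
pub = rospy.Publisher('map_meta', String, latch=True, queue_size=1)

msg = String("version 1")
pub.publish(msg)

# The latched reference is NOT a copy: after this mutation, future
# subscribers receive "version 2", not the "version 1" that was published.
msg.data = "version 2"

# To keep the latched value stable, publish a fresh instance instead
# of reusing and mutating a shared message object.
pub.publish(String("version 1"))
```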
headers=None (dict)
Add additional key-value pairs to the connection headers for future connections.
queue_size=None (int) [New in Hydro]
This is the size of the outgoing message queue used for asynchronous publishing. See the section "Choosing a good queue_size" below for details.
Publisher.publish()
There are three different ways of calling publish() ranging from an explicit style, where you provide a Message instance, to two implicit styles that create the Message instance on the fly.
Explicit style
The explicit style is simple: you create your own Message instance and pass it to publish, e.g.:
pub.publish(std_msgs.msg.String("hello world"))
Implicit style with in-order arguments
In the in-order style, a new Message instance will be created with the arguments provided, in order. The argument order is the same as the order of the fields in the Message, and you must provide a value for all of the fields. For example, std_msgs.msg.String only has a single string field, so you can call:
pub.publish("hello world")
std_msgs.msg.ColorRGBA has four fields (r, g, b, a), so we could call:
pub.publish(255.0, 255.0, 255.0, 128.0)
which would create a ColorRGBA instance with r, g, and b set to 255.0 and a set to 128.0.
Implicit style with keyword arguments
In the keyword style, you only initialize the fields that you wish to provide values for. The rest receive default values (e.g. 0, empty string, etc...). For std_msgs.msg.String, the name of its lone field is data, so you can call:
pub.publish(data="hello world")
std_msgs.msg.ColorRGBA has four fields (r, g, b, a), so we could call:
pub.publish(b=255)
which would publish a ColorRGBA instance with b=255 and the rest of the fields set to 0.0.
queue_size: publish() behavior and queuing
publish() in rospy is synchronous by default (for backward compatibility reasons) which means that the invocation is blocking until:
- the message has been serialized into a buffer
- and that buffer has been written to the transport of every current subscriber
If any of the connections has connectivity problems, publish() might block for an indefinite amount of time. This is a common problem when subscribing to topics via a wireless connection.
As of Hydro it is recommended to use the new asynchronous publishing behavior which is more in line with the behavior of roscpp.
In order to use the new behavior, the keyword argument queue_size must be passed to the rospy.Publisher constructor; it defines the maximum queue size before messages are dropped.
While serialization still happens synchronously when publish() is invoked, writing the serialized data to each subscriber's connection happens asynchronously in separate threads. As a result, only the subscribers with connectivity problems will miss new messages.
If you are publishing faster than rospy can send the messages over the wire, rospy will start dropping old messages.
Note that there may also be an OS-level queue at the transport level, such as the TCP/UDP send buffer.
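The drop-oldest semantics described above can be illustrated in plain Python. This is only a model of the documented behavior, not rospy's actual implementation: a bounded queue that discards its oldest entry when full behaves like an asynchronous Publisher whose sender thread has fallen behind.

```python
from collections import deque

# a bounded queue that discards the oldest entry when full, mirroring
# an asynchronous Publisher created with queue_size=3
queue = deque(maxlen=3)

# "publish" 5 messages faster than the sender thread drains them
for i in range(5):
    queue.append("msg %d" % i)

# only the 3 newest messages remain; msg 0 and msg 1 were dropped
print(list(queue))  # ['msg 2', 'msg 3', 'msg 4']
```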
Choosing a good queue_size
It is hard to provide a rule of thumb for what queue size is best for your application, as it depends on many variables of your system. Still, for beginners who do not care much about message passing, here we provide some guidelines.
If you're just sending one message at a fixed rate it is fine to use a queue size as small as the frequency of the publishing.
If you are sending multiple messages in a burst, you should make sure that the queue size is big enough to contain all of those messages. Otherwise messages are likely to be lost.
Generally speaking, a bigger queue_size only uses more memory when you are actually behind with processing, so it is better to pick a value that is larger than necessary than one that is too small.
But if your queue is much larger than it needs to be, a lagging subscriber will cause many messages to queue up. This can lead to messages arriving with large latency, since all messages are delivered in FIFO order once the subscriber catches up.
queue_size Omitted
If the keyword argument is omitted or None is passed, or on Groovy and older ROS distributions, publishing is handled synchronously. As of Indigo, not passing the keyword argument queue_size will result in a warning being printed to the console.
queue_size None
Not recommended. Publishing is handled synchronously, which means that one blocking subscriber blocks all publishing. As of Indigo, passing None will result in a warning being printed to the console.
queue_size Zero
While a value of 0 means an infinite queue, this can be dangerous since the memory usage can grow infinitely and is therefore not recommended.
queue_size One, Two, Three
If your system is not overloaded, you could argue that a queued message should be picked up by the dispatcher thread within a tenth of a second. So a queue size of 1, 2, or 3 is absolutely fine when publishing at 10 Hz.
Setting queue_size to 1 is a valid approach if you want to make sure that a newly published value always causes any older, not-yet-sent values to be dropped. This is good for, say, a sensor that only cares about the latest measurement: never send an older measurement when a newer one exists.
queue_size Ten or More
An example of when to use a larger queue size, such as 10 or greater, is user interface messages (e.g. digital_io, a push-button status) that benefit from a larger queue_size to avoid missing a change in value. Another example is when you want to record all published values, including those that would otherwise be dropped when publishing at a high rate with a small queue size.
Complete example
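A minimal complete publisher node, combining the pieces above (node name, topic name, and rate are illustrative choices), might look like:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def talker():
    # asynchronous publishing with a bounded outgoing queue
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        # implicit in-order style: the string fills String's lone
        # 'data' field
        pub.publish("hello world %s" % rospy.get_time())
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        # Ctrl-C or node shutdown interrupts rate.sleep()
        pass
```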
Subscribing to a topic
See also: rospy.Subscriber Code API
Connection Information
A subscriber can get access to a "connection header", which includes debugging information such as who sent the message, as well as information like whether or not a message was latched. This data is stored as the _connection_header field of a received message.
e.g.
print m._connection_header
{'callerid': '/talker_38321_1284999593611', 'latching': '0', 'md5sum': '992ce8a1687cec8c8bd883ec73ca41d1', 'message_definition': 'string data\n\n', 'topic': '/chatter', 'type': 'std_msgs/String'}
We do not recommend using callerid information beyond debugging purposes, as it can lead to brittle code in an anonymous publish/subscribe architecture.