Rx timestamps deviate a lot, a warm-up issue?


I’m working on a wireless synchronization method using the DWM1000. For the current tests we have a
transmitter which sends at regular intervals, like so:

[code]// first transmit is delayed by the given interval (be safe with program delays)
uint32_t zulu_time = dwt_readsystimestamphi32() + interval;

for (int i = 0; i < messages; i++, zulu_time += interval) {

	write_txdata();                     /* compose and load the frame */

	/* Set transmission time */
	dwt_setdelayedtrxtime(zulu_time);

	/* Start transmission, no response as anchor 0 */
	dwt_starttx(DWT_START_TX_DELAYED);

	/* wait for interrupt by suspending, callbacks are called here */
}[/code]

write_txdata() composes the message and does:

[code]/* Write frame data to DW1000 and prepare transmission */
dwt_writetxdata(13, tx_msg, 0);
dwt_writetxfctrl(13, 0, 0);[/code]

So this should produce messages at a rather precise interval.

The transmitter is running permanently with an intermessage interval of 10ms.

On another node the receiver is started and reports the incoming messages like this:

[code]/* Receive callback (called by DW ISR function @dwt_isr) */
static void rx_ok_cb(const dwt_cb_data_t *cb_data)
{
	static tdoa_message_t recv_msg;

	a_callback_was_called = TRUE;
	callback_status = cb_data->status;

	read_rxdata(&recv_msg, cb_data->datalength);
	chprintf(out, "%d, %d, %s\n", recv_msg.anchor_id, recv_msg.superframe_no,
		...);
}[/code]


The receiver loop is:

[code]for (;;) {

	...

	if (messages > 0) {
		...
		if (messages == 0)
			...
	}
}[/code]

This works mostly as expected, but analyzing the data (I took 100,000 samples of the rx timestamp) shows a pretty strange effect:

There are quite dramatic changes in deviation whenever the receiver is started, which go down to a certain (low) level after about 20-60s.
The noise floor is quite low (units are in mm @ speed of light):

Please also have a look at the APS013 … XTAL start up/settling time will also give rise to transients, as well as temperature.


You mean APS013, ‘The implementation of two-way ranging with the DW1000’? That one does not say
anything about start-up and the like.
I have to add that the strong initial shift is deterministic and always starts after the receiving process
is running, i.e. it makes no difference to add a delay of 60 s after initializing the DWM1000!

I know temperature changes will have an influence (and it might be quite hefty), but in my case it
is always the same effect, no matter what the ambient temperature is.
It might still be that the IC temperature rises dramatically when the receiver starts up, which could
cause the effect, but I was hoping to find an explanation for this behaviour in the Decawave documentation.

We can easily avoid the trouble by simply dropping the first 60 s of incoming data and only then starting to synchronize
the node, but it would put my mind at rest to have a good explanation for the effect.

I’ve attached the 2 images from my first post as a reference, as the links are probably not very durable.
Ah, ok you probably meant APS011 ‘Sources of error …’.

But I think the receiver start-up does not coincide with the crystal startup?

Sorry yes, I meant APS011 …

Yes, the DW1000 will run hotter in continuous RX mode, as it draws quite a lot of current the whole time. Generally about 5-10 degrees hotter.