So, about "clock correction" in SS TWR - how does it actually work?

I apologize in advance that this is a much-repeated question; however, I hope this can be the post to “end all posts” on this topic.

So you have an initiator and a responder, right? Both have their own clocks. Ideally, if both clocks had exactly the same accuracy and frequency, then this particular source of inaccuracy would be non-existent. Why?

Let’s say I is the initiator and R is the responder. Hypothetically, I could count 100 clock pulses in 1 us whereas R counts only 95 pulses in the same interval. But when I receives the responder’s packet, it treats the timestamps as if R had also counted 100 pulses per us, which isn’t the case in reality. This is where the error is introduced.
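
To put concrete numbers on it, here is a toy C sketch using those hypothetical rates (the 500 us reply delay is a number I made up purely for illustration):

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical rates from above: I counts 100 pulses per us while
     * R only manages 95 pulses in the same real microsecond. */
    const double i_pulses_per_us = 100.0;
    const double r_pulses_per_us = 95.0;

    /* R times a (made-up) 500 us reply delay with its own clock. */
    double real_delay_us = 500.0;
    double r_pulse_count = real_delay_us * r_pulses_per_us;    /* 47500 */

    /* I converts those pulses back to time, wrongly assuming they were
     * counted at its own rate of 100 pulses per us. */
    double assumed_delay_us = r_pulse_count / i_pulses_per_us; /* 475 us */

    /* 25 us of error, enormous next to a few ns of actual flight time. */
    printf("delay error: %.1f us\n", real_delay_us - assumed_delay_us);
    return 0;
}
```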

To tackle this, I need a way to obtain R’s clock “performance info” and adjust the calculation accordingly.

My questions are:

  1. Is my understanding of the whole issue correct?

  2. Is there an (official) example or API by which I can adjust/compensate/offset the clock error/differences? Some advised me to look at ex_06a_ss_twr_init in dw1000_api_rev2p14, but embarrassingly I didn’t see any API that does the compensation.

  3. If there’s such an API, do I have to do the compensation frequently, or only at a very low rate, say every 5 minutes or even less often?

  1. I think so, but just to clarify: the issue is that the two devices have different clocks.

Imagine the responder had a response time of zero (clearly impossible but just pretend for a minute):

  1. The initiator sends a message; this takes time to travel to the responder.
  2. The responder sees that message and sends its response.
  3. The response takes time to travel back to the initiator.
  4. The initiator receives the response.
  5. The initiator measures the time difference between sending the message and receiving the response.
  6. Multiplying that time difference by the speed of light tells the initiator how far the messages have travelled in total.

The distance to the responder must be exactly half of this measured distance. All very simple.
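
In code the ideal zero-delay case is just this (hypothetical timestamp names; the speed-of-light value is the in-air constant the Decawave examples use):

```c
#define SPEED_OF_LIGHT 299702547.0 /* m/s in air, as in the Decawave examples */

/* Ideal zero-delay case: both timestamps are captured on the
 * initiator's own clock (seconds). Names are hypothetical. */
static double ideal_distance_m(double poll_tx_time_s, double resp_rx_time_s)
{
    double round_trip_s = resp_rx_time_s - poll_tx_time_s;
    return (round_trip_s / 2.0) * SPEED_OF_LIGHT;
}
```
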
If the initiator clock was wrong by 1% this would result in the measured range being wrong by 1%. Clocks are never that bad; they are typically only 1-2 ppm (parts per million) off. A 1 ppm error would be 1 mm for every km measured.

So with zero delay the impact of real world clock errors would be insignificant.

Unfortunately the responder can’t respond instantly; it must send the response some time after receiving the initial message.
This isn’t automatically a problem: if we make the response time a previously agreed constant length then the initiator can simply subtract this constant from the measured time.
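
With the constant agreed in advance, the calculation only changes by one subtraction (same hypothetical names as above; REPLY_TIME_S is the agreed constant):

```c
#define REPLY_TIME_S 0.001 /* hypothetical agreed 1 ms responder delay */

/* Same as above, but subtract the agreed constant reply time first. */
static double distance_m(double poll_tx_time_s, double resp_rx_time_s)
{
    double tof_s = ((resp_rx_time_s - poll_tx_time_s) - REPLY_TIME_S) / 2.0;
    return tof_s * SPEED_OF_LIGHT; /* SPEED_OF_LIGHT from the sketch above */
}
```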

The complication is a combination of two factors:

  1. Light travels very quickly. This means the response time of the responder will be far larger than the time the messages spend travelling, so small percentage errors in the response time can have a large impact on the result.
  2. The two systems have different clocks, so a duration measured by the responder may not correspond to exactly the same duration measured on the initiator.

These combine to mean that a tiny error in the response time can very easily become the largest error source in the system. e.g. If your response time was intended to be 1 ms but your clock was wrong by 1 part per million then the delay would be wrong by 1 ns. If the initiator subtracts a time that is wrong by 1 ns, that is ~30 cm of light travel, which is still roughly 15 cm of error in the reported range after the division by two.
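
Running those numbers (toy arithmetic only):

```c
#include <stdio.h>

int main(void)
{
    const double c_mps   = 299702547.0;           /* speed of light in air, m/s */
    double reply_s       = 1e-3;                  /* intended 1 ms reply time   */
    double clock_error   = 1e-6;                  /* 1 ppm                      */
    double time_error_s  = reply_s * clock_error; /* 1 ns                       */
    double path_error_m  = time_error_s * c_mps;  /* ~0.30 m of light travel    */
    double range_error_m = path_error_m / 2.0;    /* ~0.15 m after halving      */

    printf("time error: %g ns, range error: %.1f cm\n",
           time_error_s * 1e9, range_error_m * 100.0);
    return 0;
}
```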

  1. There isn’t an API for this directly because the method you use to correct for the clock is up to you. You can read the clock error from the Carrier Recovery Integrator register. This will give you an estimate of the difference in speed between the transmitter’s and the receiver’s clocks. You can then use this value to scale the response time delay used in your calculations (see the sketch after this list).
    Remember this is the clock difference between two devices, which means you’ll have a different value for each initiator/responder pair.

  2. The simple method is to read the clock error for every received packet and use that value. Why only update something every few minutes when the data is there in every packet? This also saves having to track the current error for each device in the system, which makes the code simpler.
    Individual measurements will have a little noise in them. A more advanced approach would be to keep track of the error for each device and apply some smoothing or averaging to those values; they generally only drift slowly, so some form of filtering should in theory reduce the noise (a minimal filter sketch follows below).
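
For reference, later revisions of ex_06a_ss_twr_init do exactly this through dwt_readcarrierintegrator(); the sketch below is modelled on that example, assuming channel 5 (hence the HERTZ_TO_PPM_MULTIPLIER_CHAN_5 constant) and illustrative timestamp names:

```c
#include "deca_device_api.h" /* dwt_readcarrierintegrator(), DWT_TIME_UNITS */

#define SPEED_OF_LIGHT 299702547.0 /* m/s in air, as in the Decawave examples */

/* SS TWR with clock offset compensation. poll_tx_ts/resp_rx_ts are
 * device time units on the initiator's clock; poll_rx_ts/resp_tx_ts
 * are reported back by the responder. Names are illustrative. */
static double compensated_distance(uint32 poll_tx_ts, uint32 resp_rx_ts,
                                   uint32 poll_rx_ts, uint32 resp_tx_ts)
{
    int32 rtd_init = (int32)(resp_rx_ts - poll_tx_ts); /* round trip      */
    int32 rtd_resp = (int32)(resp_tx_ts - poll_rx_ts); /* responder delay */

    /* Clock offset estimate from the carrier recovery integrator,
     * converted from raw register units to a dimensionless ratio. */
    float clock_offset_ratio = dwt_readcarrierintegrator() *
            (FREQ_OFFSET_MULTIPLIER * HERTZ_TO_PPM_MULTIPLIER_CHAN_5 / 1.0e6);

    /* Scale the responder-measured delay before subtracting it. */
    double tof = ((rtd_init - rtd_resp * (1.0 - clock_offset_ratio)) / 2.0)
                 * DWT_TIME_UNITS;

    return tof * SPEED_OF_LIGHT; /* metres */
}
```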
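
And for the smoothing mentioned above, a minimal per-pair filter might look like this (sketch only; the alpha value is an arbitrary tuning choice):

```c
/* Exponential moving average of the per-packet clock offset estimates.
 * Keep one instance per initiator/responder pair. Sketch only. */
typedef struct {
    float filtered_offset;
    int   initialised;
} offset_filter_t;

static float filter_clock_offset(offset_filter_t *f, float raw_offset)
{
    const float alpha = 0.1f; /* smoothing factor, tune for your update rate */

    if (!f->initialised) {
        f->filtered_offset = raw_offset; /* seed with the first reading */
        f->initialised = 1;
    } else {
        f->filtered_offset += alpha * (raw_offset - f->filtered_offset);
    }
    return f->filtered_offset;
}
```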


Incredibly detailed and clear answer, thank you AndyA. I’m glad my guess wasn’t wrong by like a mile or something.

I factored in the clock differences by using an API found in that example, and so far everything works well. Hope it lasts and stands the test of time.

Thank you once again Andy.